@@ -57,4 +57,4 @@ Old, deprecated interface, which will be deleted at some point: Please use \fBel
All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed with \fBmy_prow\fP rows and \fBmy_pcol\fP columns on the calling process. This function has to be invoked by all involved processes before any other calls to ELPA routines.
...
@@ -59,4 +59,4 @@ Old, deprecated interface, which will be deleted at some point: Please use \fBel
All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed with \fBmy_prow\fP rows and \fBmy_pcol\fP columns on the calling process. This function has to be invoked by all involved processes before any other calls to ELPA routines.
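.br
For illustration, a minimal C sketch of this setup. The C prototype of \fBelpa_get_communicators\fP and the use of Fortran integer handles for the communicators are assumptions modelled on the legacy ELPA C API, not taken from this page:
.br
.nf
#include <mpi.h>

/* Assumed legacy prototype: the global communicator is passed as a
   Fortran integer handle, and the row/column communicators are
   returned the same way.                                           */
int elpa_get_communicators(int mpi_comm_global, int my_prow, int my_pcol,
                           int *mpi_comm_rows, int *mpi_comm_cols);

void setup_elpa_grid(int np_cols)
{
    int my_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    int my_prow = my_rank / np_cols;   /* row index of this process    */
    int my_pcol = my_rank % np_cols;   /* column index of this process */

    int mpi_comm_rows, mpi_comm_cols;
    /* Every involved process must make this call before any other
       ELPA routine.                                                */
    elpa_get_communicators((int) MPI_Comm_c2f(MPI_COMM_WORLD),
                           my_prow, my_pcol,
                           &mpi_comm_rows, &mpi_comm_cols);
}
.fi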
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.br
.SH DESCRIPTION
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point. Use \fBsolve_evp_complex_1stage\fP(3) or \fBelpa_solve_evp_complex\fP(3).
Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
The interface \fBelpa_solve_evp_complex\fP(3) is a more flexible alternative.
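.br
As a usage illustration, a hedged C sketch of a call to the 1-stage complex solver. The function name \fBelpa_solve_evp_complex_1stage\fP and its exact prototype are assumptions modelled on the parameter list documented for the C interface:
.br
.nf
#include <complex.h>
#include <stdlib.h>

/* Assumed legacy prototype, following the documented argument order. */
int elpa_solve_evp_complex_1stage(int na, int nev, double complex *a, int lda,
                                  double *ev, double complex *q, int ldq,
                                  int nblk, int matrixCols,
                                  int mpi_comm_rows, int mpi_comm_cols);

int call_complex_1stage(int na, int nev, int lda, int ldq, int nblk,
                        int matrixCols, int mpi_comm_rows, int mpi_comm_cols)
{
    /* All argument memory must be allocated outside the solver call. */
    double complex *a  = malloc((size_t)lda * matrixCols * sizeof *a);
    double complex *q  = malloc((size_t)ldq * matrixCols * sizeof *q);
    double         *ev = malloc((size_t)na * sizeof *ev);

    /* ... fill the local block-cyclic part of a here ... */

    int ok = elpa_solve_evp_complex_1stage(na, nev, a, lda, ev, q, ldq,
                                           nblk, matrixCols,
                                           mpi_comm_rows, mpi_comm_cols);

    free(a); free(q); free(ev);
    return ok;   /* 1 on success, 0 on failure */
}
.fi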
.RI "With the definintions of the input and output variables:"
.RI "With the definintions of the input and output variables:"
.br
.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.br
.RI "int\fBnev\fP:number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.RI "double complex *\fBa\fP:pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.br
.RI "double complex *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.RI "double*\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
.br
.br
.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
.RI "double complex *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
.br
.br
.RI "double complex *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.br
.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.RI "int \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.br
.RI "int \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.br
.RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.RI "int \fBTHIS_ELPA_COMPLEX_KERNEL\fp: choose the compute kernel for 2-stage solver"
.br
.RI "char *\fBmethod\fP: use 1stage solver if "1stage", use 2stage solver if "2stage", (at the moment) use 2stage solver if "auto" "
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
.SH DESCRIPTION
Solve the complex eigenvalue problem with the 2-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
The interface \fBelpa_solve_evp_complex\fP(3) is a more flexible alternative.
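.br
A hedged C sketch of the corresponding call follows. The prototype and the numeric kernel identifier are assumptions; the actual kernel constants are defined in the ELPA headers:
.br
.nf
#include <complex.h>
#include <mpi.h>

/* Assumed legacy prototype; the last argument selects the compute kernel. */
int elpa_solve_evp_complex_2stage(int na, int nev, double complex *a, int lda,
                                  double *ev, double complex *q, int ldq,
                                  int nblk, int matrixCols,
                                  int mpi_comm_rows, int mpi_comm_cols,
                                  int mpi_comm_all,
                                  int THIS_ELPA_COMPLEX_KERNEL);

int call_complex_2stage(int na, int nev, double complex *a, int lda,
                        double *ev, double complex *q, int ldq, int nblk,
                        int matrixCols, int mpi_comm_rows, int mpi_comm_cols)
{
    int mpi_comm_all = (int) MPI_Comm_c2f(MPI_COMM_WORLD);
    int kernel = 0;   /* placeholder: substitute a kernel constant from
                         the ELPA headers                               */
    return elpa_solve_evp_complex_2stage(na, nev, a, lda, ev, q, ldq,
                                         nblk, matrixCols,
                                         mpi_comm_rows, mpi_comm_cols,
                                         mpi_comm_all, kernel);
}
.fi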
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.br
.SH DESCRIPTION
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point. Use \fBsolve_evp_real_1stage\fP(3) or \fBelpa_solve_evp_real\fP(3).
Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
The interface \fBelpa_solve_evp_real\fP(3) is a more flexible alternative.
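.br
The real variant differs from the complex sketch above only in its element types; a minimal hedged C fragment, with the prototype again assumed from the documented parameter conventions:
.br
.nf
/* Assumed legacy prototype: a, q, and ev are all plain double arrays. */
int elpa_solve_evp_real_1stage(int na, int nev, double *a, int lda,
                               double *ev, double *q, int ldq,
                               int nblk, int matrixCols,
                               int mpi_comm_rows, int mpi_comm_cols);

int call_real_1stage(int na, int nev, double *a, int lda, double *ev,
                     double *q, int ldq, int nblk, int matrixCols,
                     int mpi_comm_rows, int mpi_comm_cols)
{
    /* Returns 1 on success, 0 on failure. */
    return elpa_solve_evp_real_1stage(na, nev, a, lda, ev, q, ldq,
                                      nblk, matrixCols,
                                      mpi_comm_rows, mpi_comm_cols);
}
.fi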
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.br
.RI "integer, intent(in), optional \fBTHIS_ELPA_REAL_KERNEL\fp: choose the compute kernel for 2-stage solver"
.br
.RI "logical, intent(in), optional: \fBuseQR\fP: optional argument; switches to QR-decomposition if set to .true."
.RI "logical, intent(in), optional: \fBuseQR\fP: optional argument; switches to QR-decomposition if set to .true."
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.RI "logical \fBsuccess\fP: return value indicating success or failure"
...
@@ -58,36 +60,40 @@ use elpa2
.RI "With the definintions of the input and output variables:"
.RI "With the definintions of the input and output variables:"
.br
.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.br
.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.RI "double *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.br
.RI "double *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
.br
.br
.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
.RI "double *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
.br
.br
.RI "double *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.br
.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.RI "int \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.br
.RI "int \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.br
.RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.RI "int \fBTHIS_ELPA_REAL_KERNEL\fp: choose the compute kernel for 2-stage solver"
.br
.RI "int \fBuseQR\fP: if set to 1 switch to QR-decomposition"
.RI "int \fBuseQR\fP: if set to 1 switch to QR-decomposition"
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
.SH DESCRIPTION
Solve the real eigenvalue problem with the 2-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
The interface \fBelpa_solve_evp_real\fP(3) is a more flexible alternative.
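.br
A hedged C sketch of the 2-stage real call with the QR path enabled. The prototype and the kernel identifier are assumptions modelled on the parameter list above:
.br
.nf
#include <mpi.h>

/* Assumed legacy prototype; useQR = 1 switches to QR decomposition. */
int elpa_solve_evp_real_2stage(int na, int nev, double *a, int lda,
                               double *ev, double *q, int ldq,
                               int nblk, int matrixCols,
                               int mpi_comm_rows, int mpi_comm_cols,
                               int mpi_comm_all,
                               int THIS_ELPA_REAL_KERNEL, int useQR);

int call_real_2stage(int na, int nev, double *a, int lda, double *ev,
                     double *q, int ldq, int nblk, int matrixCols,
                     int mpi_comm_rows, int mpi_comm_cols)
{
    int mpi_comm_all = (int) MPI_Comm_c2f(MPI_COMM_WORLD);
    int kernel = 0;   /* placeholder: substitute a kernel constant from
                         the ELPA headers                               */
    int useQR  = 1;   /* enable the QR-decomposition path */
    return elpa_solve_evp_real_2stage(na, nev, a, lda, ev, q, ldq,
                                      nblk, matrixCols,
                                      mpi_comm_rows, mpi_comm_cols,
                                      mpi_comm_all, kernel, useQR);
}
.fi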