Commit 72d1c50c authored by Andreas Marek

Man pages for elpa_solve_evp_real and elpa_solve_evp_complex

parent fe11440b
@@ -23,5 +23,5 @@ A. Marek, MPCDF
.SH "Reporting bugs"
Report bugs to the ELPA mail elpa-library@mpcdf.mpg.de
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real_double\fP(3) \fBelpa_solve_evp_real_single\fP(3) \fBelpa_solve_evp_complex_double\fP(3) \fBelpa_solve_evp_complex_double\fP(3) \fBelpa_solve_evp_complex_single\fP(3) \fBsolve_evp_real_double\fP(3) \fBsolve_evp_real_single\fP(3) \fBsolve_evp_complex_double\fP(3) \fBsolve_evp_complex_single\fP(3) \fBsolve_evp_real_2stage_double\fP(3) \fBsolve_evp_real_2stage_single\fP(3) \fBsolve_evp_complex_2stage_double\fP(3) \fBsolve_evp_complex_2stage_single\fP(3)
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real_double\fP(3) \fBelpa_solve_evp_real_single\fP(3) \fBelpa_solve_evp_complex_double\fP(3) \fBelpa_solve_evp_complex_single\fP(3) \fBelpa_solve_evp_real_1stage_double\fP(3) \fBelpa_solve_evp_real_1stage_single\fP(3) \fBelpa_solve_evp_real_2stage_double\fP(3) \fBelpa_solve_evp_real_2stage_single\fP(3) \fBelpa_solve_evp_complex_2stage_double\fP(3) \fBelpa_solve_evp_complex_2stage_single\fP(3)
@@ -57,4 +57,4 @@ Old, deprecated interface, which will be deleted at some point: Please use \fBel
All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed with \fBmy_prow\fP rows and \fBmy_pcol\fP columns on the calling process. This function has to be invoked by all involved processes before any other calls to ELPA routines.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBsolve_evp_real\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real\fP(3) \fBelpa_solve_evp_complex\fP(3) \fBelpa2_print_kernels\fP(1)
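To make the communicator setup described above concrete, here is a minimal Fortran sketch. It assumes the argument order of elpa_get_communicators shown below (global communicator, process row, process column, then the two output communicators) and that the integer result is an MPI-style error code; the grid coordinates my_prow/my_pcol come from the caller's own 2D process distribution. The wrapper name example_get_communicators is hypothetical.

! Hypothetical helper (not part of ELPA): derives the ELPA communicators.
subroutine example_get_communicators(mpi_comm_global, my_prow, my_pcol, &
                                     mpi_comm_rows, mpi_comm_cols)
  use elpa1
  implicit none
  integer, intent(in)  :: mpi_comm_global          ! e.g. MPI_COMM_WORLD
  integer, intent(in)  :: my_prow, my_pcol         ! this process's row/column in the 2D grid
  integer, intent(out) :: mpi_comm_rows, mpi_comm_cols
  integer :: mpierr

  ! Split the global communicator into per-row and per-column communicators;
  ! the integer result is assumed to be an MPI-style error code.
  mpierr = elpa_get_communicators(mpi_comm_global, my_prow, my_pcol, &
                                  mpi_comm_rows, mpi_comm_cols)
end subroutine example_get_communicators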
@@ -59,4 +59,4 @@ Old, deprecated interface, which will be deleted at some point: Please use \fBel
All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed with \fBmy_prow\fP rows and \fBmy_pcol\fP columns on the calling process. This function has to be invoked by all involved processes before any other calls to ELPA routines.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBsolve_evp_real\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real\fP(3) \fBelpa_solve_evp_complex\fP(3) \fBelpa2_print_kernels\fP(1)
@@ -45,7 +45,8 @@ use elpa1
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point. Use \fBsolve_evp_complex_1stage\fP(3) or \fBelpa_solve_evp_complex\fP(3).
Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBsolve_evp_real_1stage_double\fP(3) \fBsolve_evp_real_1stage_single\fP(3) \fBsolve_evp_complex_1stage_single\fP(3) \fBsolve_evp_real_2stage_double\fP(3) \fBsolve_evp_real_2stage_single\fP(3) \fBsolve_evp_complex_2stage_double\fP(3) \fBsolve_evp_complex_2stage_single\fP(3) \fBelpa2_print_kernels\fP(1)
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real_1stage_double\fP(3) \fBelpa_solve_evp_real_1stage_single\fP(3) \fBelpa_solve_evp_complex_1stage_single\fP(3) \fBelpa_solve_evp_real_2stage_double\fP(3) \fBelpa_solve_evp_real_2stage_single\fP(3) \fBelpa_solve_evp_complex_2stage_double\fP(3) \fBelpa_solve_evp_complex_2stage_single\fP(3) \fBelpa2_print_kernels\fP(1)
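A hedged Fortran sketch of the 1-stage complex solve described above. The routine name elpa_solve_evp_complex_1stage_double is taken from the cross-referenced pages; the argument order and the dimensioning of ev with na are assumptions, so check the installed SYNOPSIS before relying on it. The wrapper name is hypothetical.

! Hypothetical helper (not part of ELPA): wraps the 1-stage complex solve.
subroutine example_solve_complex_1stage(na, nev, a, lda, ev, q, ldq, nblk, &
                                        matrixCols, mpi_comm_rows, mpi_comm_cols)
  use elpa1
  implicit none
  integer, intent(in)            :: na, nev, lda, ldq, nblk, matrixCols
  integer, intent(in)            :: mpi_comm_rows, mpi_comm_cols
  complex(kind=8), intent(inout) :: a(lda,matrixCols)   ! local part of the na x na matrix
  real(kind=8), intent(out)      :: ev(na)              ! assumption: sized na, first nev entries used
  complex(kind=8), intent(out)   :: q(ldq,matrixCols)   ! eigenvectors for the first nev eigenvalues
  logical :: success

  ! Argument order is an assumption modeled on the C interfaces of the
  ! 2-stage pages; consult the installed SYNOPSIS for the authoritative form.
  success = elpa_solve_evp_complex_1stage_double(na, nev, a, lda, ev, q, ldq, &
                                                 nblk, matrixCols,            &
                                                 mpi_comm_rows, mpi_comm_cols)
  if (.not. success) print *, 'elpa_solve_evp_complex_1stage_double failed'
end subroutine example_solve_complex_1stage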
@@ -84,5 +84,7 @@ use elpa1
.SH DESCRIPTION
Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
The interface \fBelpa_solve_evp_complex\fP(3) is a more flexible alternative.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real\fP(3) \fBelpa_solve_evp_complex\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
@@ -12,7 +12,7 @@ use elpa1
use elpa2
.br
.br
.RI "success = \fBsolve_evp_real_2stage\fP (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, mpi_comm_all, THIS_REAL_ELPA_KERNEL)"
.RI "success = \fBsolve_evp_real_2stage\fP (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, mpi_comm_all, THIS_COMPLEX_ELPA_KERNEL=THIS_COMPLEX_ELPA_KERNEL)"
.br
.RI " "
.br
@@ -43,6 +43,8 @@ use elpa2
.br
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.RI "int \fBTHIS_ELPA_COMPLEX_KERNEL\fp: choose the compute kernel for 2-stage solver"
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
@@ -51,7 +53,7 @@ use elpa2
#include <complex.h>
.br
.RI "success = \fBsolve_evp_complex_2stage\fP (\fBint\fP na, \fBint\fP nev, \fB double complex *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble complex *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP mpi_comm_all, \fBint\fP THIS_ELPA_REAL_KERNEL);"
.RI "success = \fBsolve_evp_complex_2stage\fP (\fBint\fP na, \fBint\fP nev, \fB double complex *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble complex *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP mpi_comm_all, \fBint\fP THIS_ELPA_COMPLEX_KERNEL);"
.br
.RI " "
.br
@@ -82,10 +84,16 @@ use elpa2
.br
.RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.RI "int \fBTHIS_ELPA_COMPLEX_KERNEL\fp: choose the compute kernel for 2-stage solver"
.br
.RI "char *\fBmethod\fP: use 1stage solver if "1stage", use 2stage solver if "2stage", (at the moment) use 2stage solver if "auto" "
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
.SH DESCRIPTION
Solve the complex eigenvalue problem with the 2-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
The interface \fBelpa_solve_evp_complex\fP(3) is a more flexible alternative.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real\fP(3) \fBelpa_solve_evp_complex\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
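A hedged Fortran sketch of the 2-stage complex solve with an explicit compute kernel, following the Fortran and C SYNOPSIS lines above. lda is passed explicitly as in the C interface, the kernel argument is passed positionally, and my_kernel is a placeholder id (see elpa2_print_kernels(1)); the wrapper name is hypothetical.

! Hypothetical helper (not part of ELPA): 2-stage complex solve with a kernel.
subroutine example_solve_complex_2stage(na, nev, a, lda, ev, q, ldq, nblk,       &
                                        matrixCols, mpi_comm_rows,               &
                                        mpi_comm_cols, mpi_comm_all, my_kernel)
  use elpa1
  use elpa2
  implicit none
  integer, intent(in)            :: na, nev, lda, ldq, nblk, matrixCols
  integer, intent(in)            :: mpi_comm_rows, mpi_comm_cols, mpi_comm_all
  integer, intent(in)            :: my_kernel       ! kernel id, see elpa2_print_kernels(1)
  complex(kind=8), intent(inout) :: a(lda,matrixCols)
  real(kind=8), intent(out)      :: ev(na)          ! assumption: sized na, first nev entries used
  complex(kind=8), intent(out)   :: q(ldq,matrixCols)
  logical :: success

  ! The kernel is passed positionally as the last argument here; this page's
  ! Fortran SYNOPSIS shows the equivalent keyword form.
  success = solve_evp_complex_2stage(na, nev, a, lda, ev, q, ldq, nblk,          &
                                     matrixCols, mpi_comm_rows, mpi_comm_cols,   &
                                     mpi_comm_all, my_kernel)
  if (.not. success) print *, 'solve_evp_complex_2stage failed'
end subroutine example_solve_complex_2stage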
@@ -45,7 +45,8 @@ use elpa1
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point. Use \fBsolve_evp_real_1stage\fP(3) or \fBelpa_solve_evp_real\fP(3).
Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBsolve_evp_real_1stage_single\fP(3) \fBsolve_evp_complex_1stage_double\fP(3) \fBsolve_evp_complex_1stage_single\fP(3) \fBsolve_evp_real_2stage_double\fP(3) \fBsolve_evp_real_2stage_single\fP(3) \fBsolve_evp_complex_2stage_double\fP(3) \fBsolve_evp_complex_2stage_single\fP(3) \fBelpa2_print_kernels\fP(1)
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real_1stage_single\fP(3) \fBelpa_solve_evp_complex_1stage_double\fP(3) \fBelpa_solve_evp_complex_1stage_single\fP(3) \fBelpa_solve_evp_real_2stage_double\fP(3) \fBelpa_solve_evp_real_2stage_single\fP(3) \fBelpa_solve_evp_complex_2stage_double\fP(3) \fBelpa_solve_evp_complex_2stage_single\fP(3) \fBelpa2_print_kernels\fP(1)
@@ -82,5 +82,7 @@ use elpa1
.SH DESCRIPTION
Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
The interface \fBelpa_solve_evp_real\fP(3) is a more flexible alternative.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real\fP(3) \fBelpa_solve_evp_complex\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
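The real-valued 1-stage call is analogous; a short hedged sketch under the same assumptions as the complex sketch above (hypothetical wrapper, assumed argument order):

! Hypothetical helper (not part of ELPA): 1-stage real solve, analogous to the
! complex sketch above.
subroutine example_solve_real_1stage(na, nev, a, lda, ev, q, ldq, nblk, &
                                     matrixCols, mpi_comm_rows, mpi_comm_cols)
  use elpa1
  implicit none
  integer, intent(in)         :: na, nev, lda, ldq, nblk, matrixCols
  integer, intent(in)         :: mpi_comm_rows, mpi_comm_cols
  real(kind=8), intent(inout) :: a(lda,matrixCols)
  real(kind=8), intent(out)   :: ev(na), q(ldq,matrixCols)
  logical :: success

  success = elpa_solve_evp_real_1stage_double(na, nev, a, lda, ev, q, ldq, &
                                              nblk, matrixCols,            &
                                              mpi_comm_rows, mpi_comm_cols)
  if (.not. success) print *, 'elpa_solve_evp_real_1stage_double failed'
end subroutine example_solve_real_1stage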
@@ -43,6 +43,8 @@ use elpa2
.br
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.RI "integer, intent(in), optional \fBTHIS_ELPA_REAL_KERNEL\fp: choose the compute kernel for 2-stage solver"
.br
.RI "logical, intent(in), optional: \fBuseQR\fP: optional argument; switches to QR-decomposition if set to .true."
.RI "logical \fBsuccess\fP: return value indicating success or failure"
@@ -82,6 +84,8 @@ use elpa2
.br
.RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.RI "int \fBTHIS_ELPA_REAL_KERNEL\fp: choose the compute kernel for 2-stage solver"
.br
.RI "int \fBuseQR\fP: if set to 1 switch to QR-decomposition"
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
@@ -89,5 +93,7 @@ use elpa2
.SH DESCRIPTION
Solve the real eigenvalue problem with the 2-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
The interface \fBelpa_solve_evp_real\fP(3) is a more flexible alternative.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real\fP(3) \fBelpa_solve_evp_complex\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
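A hedged Fortran sketch of the 2-stage real solve with the optional kernel and useQR arguments passed positionally, modeled on the parameter lists above; my_kernel is a placeholder and the wrapper name is hypothetical:

! Hypothetical helper (not part of ELPA): 2-stage real solve with a kernel and
! QR-decomposition enabled.
subroutine example_solve_real_2stage(na, nev, a, lda, ev, q, ldq, nblk,         &
                                     matrixCols, mpi_comm_rows, mpi_comm_cols,  &
                                     mpi_comm_all, my_kernel)
  use elpa1
  use elpa2
  implicit none
  integer, intent(in)         :: na, nev, lda, ldq, nblk, matrixCols
  integer, intent(in)         :: mpi_comm_rows, mpi_comm_cols, mpi_comm_all
  integer, intent(in)         :: my_kernel        ! kernel id, see elpa2_print_kernels(1)
  real(kind=8), intent(inout) :: a(lda,matrixCols)
  real(kind=8), intent(out)   :: ev(na), q(ldq,matrixCols)
  logical :: success

  ! The two trailing optional arguments are the compute kernel and useQR;
  ! passing .true. switches to the QR-decomposition path.
  success = solve_evp_real_2stage(na, nev, a, lda, ev, q, ldq, nblk,            &
                                  matrixCols, mpi_comm_rows, mpi_comm_cols,     &
                                  mpi_comm_all, my_kernel, .true.)
  if (.not. success) print *, 'solve_evp_real_2stage failed'
end subroutine example_solve_real_2stage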