Add interface to unify C and Fortran names

This commit does not change the interfaces defined in ELPA_2015.11.001!
All functionality remains available via the interface names and
definitions of ELPA_2015.11.001.

However, some new interfaces have been added in order to unify the
names used from C and Fortran code:

- The procedures to create the ELPA (row/column) communicators are now
  available from C _and_ Fortran under the name "get_elpa_communicators".
  The old Fortran name "get_elpa_row_col_comms" and the old C name
  "elpa_get_communicators" are deprecated from now on, but still available.

- The 1-stage solver routines are available from C _and_ Fortran via
  the names "solve_evp_real_1stage" and "solve_evp_complex_1stage".
  The old Fortran names "solve_evp_real" and "solve_evp_complex" are
  deprecated from now on, but still functional (see the sketch after
  this list).
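
As an illustration (not part of this commit), a minimal C sketch of the
unified entry points, based on the signatures in the man pages below; the
helper name and the caller-side setup of the matrix data are assumptions
of the sketch:

    #include <mpi.h>
    #include "elpa.h"             /* per the solve_evp_*_1stage man pages */
    #include "elpa_generated.h"   /* per the get_elpa_communicators man page */

    /* Hypothetical helper: na, nev, lda, ldq, nblk, matrixCols and the arrays
     * a, ev, q are assumed to be set up by the caller in block-cyclic layout. */
    int solve_with_unified_names(int my_prow, int my_pcol, int na, int nev,
                                 double *a, int lda, double *ev, double *q,
                                 int ldq, int nblk, int matrixCols)
    {
       int mpi_comm_rows, mpi_comm_cols;

       /* unified name; formerly get_elpa_row_col_comms (Fortran) and
        * elpa_get_communicators (C); the int argument is assumed to be the
        * Fortran handle of the global communicator */
       get_elpa_communicators(MPI_Comm_c2f(MPI_COMM_WORLD), my_prow, my_pcol,
                              &mpi_comm_rows, &mpi_comm_cols);

       /* unified name; formerly solve_evp_real (Fortran) */
       return solve_evp_real_1stage(na, nev, a, lda, ev, q, ldq, nblk,
                                    matrixCols, mpi_comm_rows, mpi_comm_cols);
    }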

All documentation (man pages, doxygen, and example test programs) has
been updated accordingly.

This commit implies a change of the API version number, but requires no
changes to codes calling ELPA (provided they have already been updated
to the API of ELPA_2015.11.001).
@@ -95,10 +95,13 @@ nobase_elpa_include_HEADERS = $(wildcard modules/*)
 nobase_elpa_include_HEADERS += elpa/elpa.h elpa/elpa_kernel_constants.h elpa/elpa_generated.h
 man_MANS = man/solve_evp_real.3 \
+           man/solve_evp_real_1stage.3 \
            man/solve_evp_complex.3 \
+           man/solve_evp_complex_1stage.3 \
            man/solve_evp_real_2stage.3 \
            man/solve_evp_complex_2stage.3 \
            man/get_elpa_row_col_comms.3 \
+           man/get_elpa_communicators.3 \
            man/print_available_elpa2_kernels.1
 # other files to distribute
@@ -342,6 +345,13 @@ EXTRA_DIST = \
   src/elpa_transpose_vectors.X90 \
   src/redist_band.X90
+
+# Rules to re-generate the headers
+elpa/elpa_generated.h: $(top_srcdir)/src/elpa_c_interface.F90
+	grep -h "^ *!c>" $^ | sed 's/^ *!c>//;' > $@ || { rm $@; exit 1; }
+
+test/shared_sources/generated.h: $(wildcard $(top_srcdir)/test/shared_sources/*.F90)
+	grep -h "^ *!c>" $^ | sed 's/^ *!c>//;' > $@ || { rm $@; exit 1; }
 LIBTOOL_DEPS = @LIBTOOL_DEPS@
 libtool: $(LIBTOOL_DEPS)
 	$(SHELL) ./config.status libtool
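For illustration (not part of the diff): the rules above just extract
specially marked comment lines from the Fortran sources. A hedged sketch of
the convention, using the get_elpa_communicators prototype documented in the
man page further down:

    /* In src/elpa_c_interface.F90, next to the Fortran implementation, a
     * comment block like the following (illustrative):
     *
     *   !c> int get_elpa_communicators(int mpi_comm_global, int my_prow,
     *   !c>                            int my_pcol, int *mpi_comm_rows,
     *   !c>                            int *mpi_comm_cols);
     *
     * is picked up by grep and, after sed strips the "!c>" marker, lands
     * verbatim in elpa/elpa_generated.h as a plain C prototype: */
    int get_elpa_communicators(int mpi_comm_global, int my_prow,
                               int my_pcol, int *mpi_comm_rows,
                               int *mpi_comm_cols);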
@@ -34,7 +34,7 @@ rm -rf config.h config-f90.h
 # by the current interface, as they are ABI compatible (e.g. only new symbols
 # were added by the new interface)
 #
-AC_SUBST([ELPA_SO_VERSION], [4:1:0])
+AC_SUBST([ELPA_SO_VERSION], [5:0:1])
 #
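Aside (not part of the diff): the bump follows the standard libtool
current:revision:age rules for a release that only adds interfaces, matching
the "ABI compatible" comment above:

    # current:revision:age -- adding interfaces means current++ (4 -> 5),
    # revision resets to 0, and age++ (0 -> 1), so binaries linked against
    # the previous interface keep working.
    AC_SUBST([ELPA_SO_VERSION], [5:0:1])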
@@ -725,7 +725,11 @@ fi
 echo "Generating elpa/elpa_generated.h..."
 mkdir -p elpa
-grep "^ *!c>" $srcdir/src/elpa_c_interface.F90 | sed 's/^ *!c>//;' > elpa/elpa_generated.h || exit 1
+grep -h "^ *!c>" $srcdir/src/elpa_c_interface.F90 | sed 's/^ *!c>//;' > elpa/elpa_generated.h || exit 1
+echo "Generating test/shared_sources/generated.h..."
+mkdir -p test/shared_sources
+grep -h "^ *!c>" $srcdir/test/shared_sources/*.F90 | sed 's/^ *!c>//;' > test/shared_sources/generated.h || exit 1

 if test "${can_compile_avx}" = "no" ; then
   if test x"${want_avx}" = x"yes" ; then
.TH "get_elpa_communicators" 3 "Wed Dec 2 2015" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
get_elpa_communicators \- get the MPI row and column communicators needed in ELPA
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.RI "success = \fBget_elpa_communicators\fP (mpi_comm_global, my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols)"
.br
.br
.RI "integer, intent(in) \fBmpi_comm_global\fP: global communicator for the calculation"
.br
.RI "integer, intent(in) \fBmy_prow\fP: row coordinate of the calling process in the process grid"
.br
.RI "integer, intent(in) \fBmy_pcol\fP: column coordinate of the calling process in the process grid"
.br
.RI "integer, intent(out) \fBmpi_comm_row\fP: communicator for communication within rows of processes"
.br
.RI "integer, intent(out) \fBmpi_comm_row\fP: communicator for communication within columns of processes"
.br
.RI "integer \fBsuccess\fP: return value indicating success or failure of the underlying MPI_COMM_SPLIT function"
.SS C INTERFACE
#include "elpa_generated.h"
.br
.RI "success = \fBget_elpa_communicators\fP (int mpi_comm_world, int my_prow, my_pcol, int *mpi_comm_rows, int *Pmpi_comm_cols);"
.br
.br
.RI "int \fBmpi_comm_global\fP: global communicator for the calculation"
.br
.RI "int \fBmy_prow\fP: row coordinate of the calling process in the process grid"
.br
.RI "int \fBmy_pcol\fP: column coordinate of the calling process in the process grid"
.br
.RI "int *\fBmpi_comm_row\fP: pointer to the communicator for communication within rows of processes"
.br
.RI "int *\fBmpi_comm_row\fP: pointer to the communicator for communication within columns of processes"
.br
.RI "int \fBsuccess\fP: return value indicating success or failure of the underlying MPI_COMM_SPLIT function"
.SH DESCRIPTION
All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed with \fBmy_prow\fP rows and \fBmy_pcol\fP columns on the calling process. This function has to be invoked by all involved processes before any other calls to ELPA routines.
.br
.SH "SEE ALSO"
\fBsolve_evp_real\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
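
Illustrative only (not part of the man page): a C sketch of the call
sequence the DESCRIPTION implies; the rank-to-grid mapping and the use of
MPI_Comm_c2f for the int communicator argument are assumptions of the
sketch:

    #include <mpi.h>
    #include "elpa_generated.h"

    int main(int argc, char **argv)
    {
       int nprocs, my_rank, np_cols = 2;   /* grid width chosen by the user */
       int my_prow, my_pcol, np_rows;
       int mpi_comm_rows, mpi_comm_cols;

       MPI_Init(&argc, &argv);
       MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
       MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

       /* column-major rank -> (row, column) mapping; any consistent mapping
        * of processes to the grid works */
       np_rows = nprocs / np_cols;
       my_prow = my_rank % np_rows;
       my_pcol = my_rank / np_rows;

       /* must be invoked by all involved processes before other ELPA calls;
        * the return value reflects the underlying MPI_COMM_SPLIT */
       int success = get_elpa_communicators(MPI_Comm_c2f(MPI_COMM_WORLD),
                                            my_prow, my_pcol,
                                            &mpi_comm_rows, &mpi_comm_cols);
       (void)success;   /* check in real code */

       /* ... distribute the matrix and call an ELPA solver here ... */

       MPI_Finalize();
       return 0;
    }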
@@ -2,7 +2,8 @@
 .ad l
 .nh
 .SH NAME
-get_elpa_row_col_comms \- get the MPI row and column communicators needed in ELPA
+get_elpa_row_col_comms \- old, deprecated interface to get the MPI row and column communicators needed in ELPA.
+It is recommended to use \fBget_elpa_communicators\fP(3)
 .br
 .SH SYNOPSIS
@@ -52,8 +53,9 @@ use elpa1
 .SH DESCRIPTION
 All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed with \fBmy_prow\fP rows and \fBmy_pcol\fP columns on the calling process. This function has to be invoked by all involved processes before any other calls to ELPA routines.
 .br
 .SH "SEE ALSO"
-\fBsolve_evp_real\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
+\fBget_elpa_communicators\fP(3) \fBsolve_evp_real\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
@@ -23,5 +23,5 @@ A. Marek, MPCDF
 .SH "Reporting bugs"
 Report bugs to the ELPA mail elpa-library@mpcdf.mpg.de
 .SH "SEE ALSO"
-\fBget_elpa_row_col_comms\fP(3) \fBsolve_evp_real\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3)
+\fBget_elpa_communicators\fP(3) \fBsolve_evp_real\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3)
@@ -2,7 +2,8 @@
 .ad l
 .nh
 .SH NAME
-solve_evp_complex \- solve the complex eigenvalue problem with the 1-stage ELPA solver
+solve_evp_complex \- solve the complex eigenvalue problem with the 1-stage ELPA solver.
+This interface is old and deprecated. It is recommended to use \fBsolve_evp_complex_1stage\fP(3)
 .br
 .SH SYNOPSIS
@@ -36,53 +37,15 @@ use elpa1
 .br
 .RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
 .br
-.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
-.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
 .RI "logical \fBsuccess\fP: return value indicating success or failure"
 .br
-.SS C INTERFACE
-#include "elpa.h"
-.br
-#include <complex.h>
-.br
-.RI "success = \fBsolve_evp_complex_stage1\fP (\fBint\fP na, \fBint\fP nev, \fB double complex *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble complex*\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols);"
-.br
-.RI " "
-.br
-.RI "With the definintions of the input and output variables:"
-.br
-.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
-.br
-.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
-.br
-.RI "double complex *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
-.br
-.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
-.br
-.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
-.br
-.RI "double complex *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
-.br
-.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
-.br
-.RI "int \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
-.br
-.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
-.br
-.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_row_col_comms\fP(3)"
-.br
-.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_row_col_comms\fP(3)"
-.br
-.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
 .SH DESCRIPTION
-Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_row_col_comms\fP(3) function. The distributed quadratic marix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
+Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
 .br
 .SH "SEE ALSO"
-\fBget_elpa_row_col_comms\fP(3) \fBsolve_evp_real\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
+\fBget_elpa_communicators\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
.TH "solve_evp_complex_1stage" 3 "Wed Dec 2 2015" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
solve_evp_complex_1stage \- solve the complex eigenvalue problem with the 1-stage ELPA solver
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.br
.RI "success = \fBsolve_evp_complex_1stage\fP (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols)"
.br
.RI " "
.br
.RI "With the definintions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "integer, intent(in) \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.RI "complex*16, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBev\fP: on output the first \fBnev\fP computed eigenvalues"
.br
.RI "complex*16, intent(inout) \fBq\fP: on output the first \fBnev\fP computed eigenvectors"
.br
.RI "integer, intent(in) \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_communicators\fP(3)"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
#include "elpa.h"
.br
#include <complex.h>
.br
.RI "success = \fBsolve_evp_complex_1stage\fP (\fBint\fP na, \fBint\fP nev, \fB double complex *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble complex*\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols);"
.br
.RI " "
.br
.RI "With the definintions of the input and output variables:"
.br
.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.RI "double complex *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
.br
.RI "double complex *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
.br
.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "int \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_communicators\fP(3)"
.br
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
.SH DESCRIPTION
Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
.SH "SEE ALSO"
\fBget_elpa_communicators\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
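
Illustrative only (not in the man page): a hedged C call sketch matching the
synopsis above; the helper name is hypothetical, allocation sizes follow the
lda x matrixCols / ldq x matrixCols conventions described here, and the
communicators are assumed to come from get_elpa_communicators(3):

    #include <complex.h>
    #include <stdlib.h>
    #include "elpa.h"

    /* Sketch: na, nev, lda, ldq, nblk, matrixCols and the ELPA row/column
     * communicators are assumed to be prepared as described above. */
    int solve_complex_sketch(int na, int nev, int lda, int ldq, int nblk,
                             int matrixCols, int mpi_comm_rows, int mpi_comm_cols)
    {
       double complex *a  = malloc((size_t)lda * matrixCols * sizeof(*a));
       double         *ev = malloc((size_t)na * sizeof(*ev)); /* first nev used */
       double complex *q  = malloc((size_t)ldq * matrixCols * sizeof(*q));

       /* ... fill the local block-cyclic part of the Hermitian matrix a ... */

       int success = solve_evp_complex_1stage(na, nev, a, lda, ev, q, ldq,
                                              nblk, matrixCols,
                                              mpi_comm_rows, mpi_comm_cols);
       free(a); free(ev); free(q);
       return success;   /* 1 on success, 0 on failure, per the man page */
    }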
@@ -37,9 +37,9 @@ use elpa2
 .br
 .RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
 .br
-.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
-.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
 .RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
 .br
@@ -51,7 +51,7 @@ use elpa2
 #include <complex.h>
 .br
-.RI "success = \fBsolve_evp_complex_stage2\fP (\fBint\fP na, \fBint\fP nev, \fB double complex *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble complex *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP mpi_comm_all, \fBint\fP THIS_ELPA_REAL_KERNEL);"
+.RI "success = \fBsolve_evp_complex_2stage\fP (\fBint\fP na, \fBint\fP nev, \fB double complex *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble complex *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP mpi_comm_all, \fBint\fP THIS_ELPA_REAL_KERNEL);"
 .br
 .RI " "
 .br
@@ -76,16 +76,16 @@ use elpa2
 .br
 .RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
 .br
-.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
-.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "int \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
 .RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
 .br
 .RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)"
 .SH DESCRIPTION
-Solve the complex eigenvalue problem with the 2-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_row_col_comms\fP(3) function. The distributed quadratic marix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
+Solve the complex eigenvalue problem with the 2-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
 .br
 .SH "SEE ALSO"
-\fBget_elpa_row_col_comms\fP(3) \fBsolve_evp_real\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
+\fBget_elpa_communicators\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
@@ -2,7 +2,8 @@
 .ad l
 .nh
 .SH NAME
-solve_evp_real \- solve the real eigenvalue problem with the 1-stage ELPA solver
+solve_evp_real \- solve the real eigenvalue problem with the 1-stage ELPA solver.
+This is an old and deprecated interface. It is recommended to use \fBsolve_evp_real_1stage\fP(3)
 .br
 .SH SYNOPSIS
@@ -36,51 +37,15 @@ use elpa1
 .br
 .RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
 .br
-.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
-.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
 .RI "logical \fBsuccess\fP: return value indicating success or failure"
 .br
-.SS C INTERFACE
-#include "elpa.h"
-.br
-.RI "success = \fBsolve_evp_real_stage1\fP (\fBint\fP na, \fBint\fP nev, \fB double *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols);"
-.br
-.RI " "
-.br
-.RI "With the definintions of the input and output variables:"
-.br
-.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
-.br
-.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
-.br
-.RI "double *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
-.br
-.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
-.br
-.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
-.br
-.RI "double *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
-.br
-.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
-.br
-.RI "int \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
-.br
-.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
-.br
-.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_row_col_comms\fP(3)"
-.br
-.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_row_col_comms\fP(3)"
-.br
-.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
 .SH DESCRIPTION
-Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_row_col_comms\fP(3) function. The distributed quadratic marix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
+Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
 .br
 .SH "SEE ALSO"
-\fBget_elpa_row_col_comms\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
+\fBget_elpa_communicators\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
.TH "solve_evp_real_1stage" 3 "Wed Dec 2 2015" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
solve_evp_real_1stage \- solve the real eigenvalue problem with the 1-stage ELPA solver
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.br
.RI "success = \fBsolve_evp_real_1stage\fP (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols)"
.br
.RI " "
.br
.RI "With the definintions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "integer, intent(in) \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.RI "real*8, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBev\fP: on output the first \fBnev\fP computed eigenvalues"
.br
.RI "real*8, intent(inout) \fBq\fP: on output the first \fBnev\fP computed eigenvectors"
.br
.RI "integer, intent(in) \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_communicators\fP(3)"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
#include "elpa.h"
.br
.RI "success = \fBsolve_evp_real_1stage\fP (\fBint\fP na, \fBint\fP nev, \fB double *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols);"
.br
.RI " "
.br
.RI "With the definintions of the input and output variables:"
.br
.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.RI "double *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
.br
.RI "double *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
.br
.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "int \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_communicators\fP(3)"
.br
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)
.SH DESCRIPTION
Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
.SH "SEE ALSO"
\fBget_elpa_communicators\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_real_2stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
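
Again illustrative only: the real 1-stage call mirrors the complex sketch
shown earlier, with plain double arrays; the helper name is hypothetical and
the allocation sizes follow the lda x matrixCols / ldq x matrixCols
convention of this page:

    #include <stdlib.h>
    #include "elpa.h"

    /* Sketch: na, nev, lda, ldq, nblk, matrixCols and the ELPA row/column
     * communicators are assumed to be prepared as described above. */
    int solve_real_sketch(int na, int nev, int lda, int ldq, int nblk,
                          int matrixCols, int mpi_comm_rows, int mpi_comm_cols)
    {
       double *a  = malloc((size_t)lda * matrixCols * sizeof(*a));
       double *ev = malloc((size_t)na * sizeof(*ev));   /* first nev entries used */
       double *q  = malloc((size_t)ldq * matrixCols * sizeof(*q));

       /* ... fill the local block-cyclic part of the symmetric matrix a ... */

       int success = solve_evp_real_1stage(na, nev, a, lda, ev, q, ldq, nblk,
                                           matrixCols, mpi_comm_rows, mpi_comm_cols);
       free(a); free(ev); free(q);
       return success;   /* 1 on success, 0 on failure */
    }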
@@ -37,9 +37,9 @@ use elpa2
 .br
 .RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
 .br
-.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
-.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
 .RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
 .br
@@ -51,7 +51,7 @@ use elpa2
 #include "elpa.h"
 .br
-.RI "success = \fBsolve_evp_real_stage2\fP (\fBint\fP na, \fBint\fP nev, \fB double *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP mpi_comm_all, \fBint\fP THIS_ELPA_REAL_KERNEL, \fBint\fP useQr);"
+.RI "success = \fBsolve_evp_real_2stage\fP (\fBint\fP na, \fBint\fP nev, \fB double *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP mpi_comm_all, \fBint\fP THIS_ELPA_REAL_KERNEL, \fBint\fP useQr);"
 .br
 .RI " "
 .br
@@ -76,9 +76,9 @@ use elpa2
 .br
 .RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
 .br
-.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
-.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_row_col_comms\fP(3)"
+.RI "int \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBget_elpa_communicators\fP(3)"
 .br
 .RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
 .br
@@ -87,7 +87,7 @@ use elpa2
 .RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)"
 .SH DESCRIPTION
-Solve the real eigenvalue problem with the 2-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_row_col_comms\fP(3) function. The distributed quadratic marix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
+Solve the real eigenvalue problem with the 2-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBget_elpa_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
 .br
 .SH "SEE ALSO"
-\fBget_elpa_row_col_comms\fP(3) \fBsolve_evp_real\fP(3) \fBsolve_evp_complex\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
+\fBget_elpa_communicators\fP(3) \fBsolve_evp_real_1stage\fP(3) \fBsolve_evp_complex_1stage\fP(3) \fBsolve_evp_complex_2stage\fP(3) \fBprint_available_elpa2_kernels\fP(1)
@@ -94,10 +94,13 @@ module ELPA1
 ! The following routines are public:
-public :: get_elpa_row_col_comms !< Sets MPI row/col communicators
-public :: solve_evp_real !< Driver routine for real eigenvalue problem
-public :: solve_evp_complex !< Driver routine for complex eigenvalue problem
+public :: get_elpa_row_col_comms !< old, deprecated interface: Sets MPI row/col communicators
+public :: get_elpa_communicators !< Sets MPI row/col communicators
+public :: solve_evp_real !< old, deprecated interface: Driver routine for real eigenvalue problem
+public :: solve_evp_real_1stage !< Driver routine for real eigenvalue problem
+public :: solve_evp_complex !< old, deprecated interface: Driver routine for complex eigenvalue problem
+public :: solve_evp_complex_1stage !< Driver routine for complex eigenvalue problem

 ! Timing results, set by every call to solve_evp_xxx
@@ -109,12 +112,110 @@ module ELPA1
 include 'mpif.h'
!> \brief get_elpa_row_col_comms: old, deprecated Fortran function to create the MPI communicators for ELPA. Better use "get_elpa_communicators"
!> \details
!> The interface and variable definition is the same as in "get_elpa_communicators"
!> \param mpi_comm_global Global communicator for the calculations (in)
!>
!> \param my_prow Row coordinate of the calling process in the process grid (in)
!>
!> \param my_pcol Column coordinate of the calling process in the process grid (in)
!>