Commit f6c67a50 authored by Andreas Marek

A bit cleanup of deprecated features

parent 44122b1f
......@@ -17,12 +17,12 @@ following listed interfaces will be removed at some time.
In order to unify the namespace of the *ELPA* public interfaces, several interfaces
have been replaced by new names. The old interfaces will be removed
Deprecated interface Replacement
===================================================
get_elpa_row_col_coms elpa_get_communicators
get_elpa_communicators elpa_get_communicators
solve_evp_real elpa_solve_evp_real_1stage_double
solve_evp_complex elpa_solve_evp_complex_1stage_double
Deprecated interface Replacement Comment
================================================================================
get_elpa_row_col_coms elpa_get_communicators (removed)
get_elpa_communicators elpa_get_communicators (removed)
solve_evp_real elpa_solve_evp_real_1stage_double (removed)
solve_evp_complex elpa_solve_evp_complex_1stage_double (removed)
solve_evp_real_1stage elpa_solve_evp_real_1stage_double
solve_evp_complex_1stage elpa_solve_evp_complex_1stage_double
solve_evp_real_2stage elpa_solve_evp_real_2stage_double
......
......@@ -455,9 +455,7 @@ dist_man_MANS = \
if ENABLE_LEGACY
dist_man_MANS += \
man/solve_evp_real.3 \
man/solve_evp_real_1stage_double.3 \
man/solve_evp_complex.3 \
man/solve_evp_complex_1stage_double.3 \
man/solve_evp_real_2stage_double.3 \
man/solve_evp_complex_2stage_double.3 \
......@@ -465,8 +463,7 @@ dist_man_MANS += \
man/elpa_solve_evp_complex_1stage_double.3 \
man/elpa_solve_evp_real_2stage_double.3 \
man/elpa_solve_evp_complex_2stage_double.3 \
man/get_elpa_row_col_comms.3 \
man/get_elpa_communicators.3 \
man/elpa_get_communicators.3 \
man/elpa_mult_at_b_real_double.3 \
man/elpa_mult_at_b_real_single.3 \
man/elpa_mult_ah_b_complex_double.3 \
......
......@@ -125,7 +125,7 @@ of a simple example program can be found in ./test_project/src.
! All ELPA routines need MPI communicators for communicating within
! rows or columns of processes, these are set in get_elpa_communicators
! rows or columns of processes, these are set in elpa_get_communicators
success = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
mpi_comm_rows, mpi_comm_cols)
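
For orientation, a minimal sketch of the setup around this call, assuming a simple row-major mapping of MPI ranks onto an np_rows x np_cols process grid (the variables myid, np_cols and mpierr are illustrative and not part of the ELPA API):

   ! determine the coordinates of this process in the process grid
   call mpi_comm_rank(mpi_comm_world, myid, mpierr)
   my_prow = myid / np_cols       ! row coordinate of the calling process
   my_pcol = mod(myid, np_cols)   ! column coordinate of the calling process

   ! create the row and column communicators needed by all ELPA routines
   success = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
                                    mpi_comm_rows, mpi_comm_cols)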
......@@ -216,8 +216,8 @@ SYNOPSIS
integer, intent(in) ldq: leading dimension of matrix q which stores the eigenvectors
integer, intent(in) nblk: blocksize of block cyclic distribution, must be the same in both directions
integer, intent(in) matrixCols: number of columns of locally distributed matrices a and q
integer, intent(in) mpi_comm_rows: communicator for communication in rows. Constructed with get_elpa_communicators(3)
integer, intent(in) mpi_comm_cols: communicator for communication in columns. Constructed with get_elpa_communicators(3)
integer, intent(in) mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
integer, intent(in) mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)
logical success: return value indicating success or failure
......@@ -238,14 +238,14 @@ SYNOPSIS
int ldq: leading dimension of matrix q which stores the eigenvectors
int nblk: blocksize of block cyclic distribution, must be the same in both directions
int matrixCols: number of columns of locally distributed matrices a and q
int mpi_comm_rows: communicator for communication in rows. Constructed with get_elpa_communicators(3)
int mpi_comm_cols: communicator for communication in columns. Constructed with get_elpa_communicators(3)
int mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
int mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)
int success: return value indicating success (1) or failure (0)
DESCRIPTION
Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators mpi_comm_rows and mpi_comm_cols are obtained with the
get_elpa_communicators(3) function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols.
elpa_get_communicators(3) function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols.
The solver will compute the first nev eigenvalues, which will be stored on exit in ev. The eigenvectors corresponding to the eigenvalues
will be stored in q. All memory of the arguments must be allocated outside the call to the solver.
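
A minimal Fortran sketch of such a call, following the argument list documented above (the declarations are only illustrative; the installed interface may take further arguments, e.g. an mpi_comm_all communicator, so check the SYNOPSIS of your ELPA version):

   real*8  :: a(lda,matrixCols), q(ldq,matrixCols), ev(na)
   logical :: success

   success = elpa_solve_evp_real_1stage_double(na, nev, a, lda, ev, q, ldq, &
                                               nblk, matrixCols, mpi_comm_rows, mpi_comm_cols)
   if (.not. success) then
      print *, 'elpa_solve_evp_real_1stage_double failed'
   endif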
......@@ -265,8 +265,8 @@ DESCRIPTION
integer, intent(in) ldq: leading dimension of matrix q which stores the eigenvectors
integer, intent(in) nblk: blocksize of block cyclic distribution, must be the same in both directions
integer, intent(in) matrixCols: number of columns of locally distributed matrices a and q
integer, intent(in) mpi_comm_rows: communicator for communication in rows. Constructed with get_elpa_communicators(3)
integer, intent(in) mpi_comm_cols: communicator for communication in columns. Constructed with get_elpa_communicators(3)
integer, intent(in) mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
integer, intent(in) mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)
logical success: return value indicating success or failure
......@@ -288,14 +288,14 @@ DESCRIPTION
int ldq: leading dimension of matrix q which stores the eigenvectors
int nblk: blocksize of block cyclic distribution, must be the same in both directions
int matrixCols: number of columns of locally distributed matrices a and q
int mpi_comm_rows: communicator for communication in rows. Constructed with get_elpa_communicators(3)
int mpi_comm_cols: communicator for communication in columns. Constructed with get_elpa_communicators(3)
int mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
int mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)
int success: return value indicating success (1) or failure (0)
DESCRIPTION
Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators mpi_comm_rows and mpi_comm_cols are obtained with the
get_elpa_communicators(3) function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols.
elpa_get_communicators(3) function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols.
The solver will compute the first nev eigenvalues, which will be stored on exit in ev. The eigenvectors corresponding to the eigenvalues
will be stored in q. All memory of the arguments must be allocated outside the call to the solver.
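
The call pattern mirrors the real case; only the matrix types change, i.e. a and q are complex*16 while the eigenvalues ev remain real*8. A hedged sketch, again following the argument list documented above:

   complex*16 :: a(lda,matrixCols), q(ldq,matrixCols)
   real*8     :: ev(na)

   success = elpa_solve_evp_complex_1stage_double(na, nev, a, lda, ev, q, ldq, &
                                                  nblk, matrixCols, mpi_comm_rows, mpi_comm_cols)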
......@@ -357,8 +357,8 @@ SYNOPSIS
integer, intent(in) ldq: leading dimension of matrix q which stores the eigenvectors
integer, intent(in) nblk: blocksize of block cyclic distribution, must be the same in both directions
integer, intent(in) matrixCols: number of columns of locally distributed matrices a and q
integer, intent(in) mpi_comm_rows: communicator for communication in rows. Constructed with get_elpa_communicators(3)
integer, intent(in) mpi_comm_cols: communicator for communication in columns. Constructed with get_elpa_communicators(3)
integer, intent(in) mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
integer, intent(in) mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)
integer, intent(in) mpi_comm_all: communicator for all processes in the processor set involved in ELPA
logical, intent(in), optional: useQR: optional argument; switches to QR-decomposition if set to .true.
logical, intent(in), optional: useGPU: decide whether GPUs should be used or not
......@@ -382,8 +382,8 @@ SYNOPSIS
int ldq: leading dimension of matrix q which stores the eigenvectors
int nblk: blocksize of block cyclic distribution, must be the same in both directions
int matrixCols: number of columns of locally distributed matrices a and q
int mpi_comm_rows: communicator for communication in rows. Constructed with get_elpa_communicators(3)
int mpi_comm_cols: communicator for communication in columns. Constructed with get_elpa_communicators(3)
int mpi_comm_rows: communicator for communication in rows. Constructed with elpa_get_communicators(3)
int mpi_comm_cols: communicator for communication in columns. Constructed with elpa_get_communicators(3)
int mpi_comm_all: communicator for all processes in the processor set involved in ELPA
int useQR: if set to 1 switch to QR-decomposition
int useGPU: decide whether the GPU version should be used or not
......@@ -393,7 +393,7 @@ SYNOPSIS
DESCRIPTION
Solve the real eigenvalue problem with the 2-stage solver. The ELPA communicators mpi_comm_rows and mpi_comm_cols are obtained with the
get_elpa_communicators(3) function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols.
elpa_get_communicators(3) function. The distributed quadratic matrix a has global dimensions na x na, and a local size lda x matrixCols.
The solver will compute the first nev eigenvalues, which will be stored on exit in ev. The eigenvectors corresponding to the eigenvalues
will be stored in q. All memory of the arguments must be allocated outside the call to the solver.
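
A hedged sketch of a 2-stage call with the optional QR decomposition switched on; the argument order follows the parameter list above, and the keyword name useQR assumes the dummy argument matches this documentation:

   success = elpa_solve_evp_real_2stage_double(na, nev, a, lda, ev, q, ldq, nblk, matrixCols, &
                                               mpi_comm_rows, mpi_comm_cols, mpi_comm_all,    &
                                               useQR=.true.)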
......
.TH "get_elpa_row_col_comms" 3 "Wed Dec 2 2015" "ELPA" \" -*- nroff -*-
.TH "elpa_get_communicators" 3 "Tue Nov 28 2017" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
get_elpa_row_col_comms \- old, deprecated interface to get the MPI row and column communicators needed in ELPA.
It is recommended to use \fBelpa_get_communicators\fP(3)
elpa_get_communicators
.br
.SH SYNOPSIS
......@@ -12,7 +11,7 @@ It is recommended to use \fBelpa_get_communicators\fP(3)
use elpa1
.br
.RI "success = \fBget_elpa_row_col_comms\fP (mpi_comm_global, my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols)"
.RI "success = \fBelpa_get_communicators\fP (mpi_comm_global, my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols)"
.br
.br
......@@ -53,9 +52,7 @@ use elpa1
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point: Please use \fBelpa_get_communicators\fP(3) !
All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed with \fBmy_prow\fP rows and \fBmy_pcol\fP columns on the calling process. This function has to be invoked by all involved processes before any other calls to ELPA routines.
.br
.SH "SEE ALSO"
......
......@@ -58,9 +58,9 @@ use elpa1
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_communicators\fP(3)"
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBwantDebug\fP: give more debugging information"
.br
......
......@@ -58,9 +58,9 @@ use elpa1
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_communicators\fP(3)"
.RI "int \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBwantDebug\fP: give more debugging information"
.br
......
......@@ -28,9 +28,9 @@ use elpa1
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBget_elpa_communicators\fP(3)"
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBget_elpa_communicators\fP(3)"
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "logical, intent(in) \fBwantDebug\fP: if .true. , print more debug information in case of an error"
......
.TH "get_elpa_communicators" 3 "Wed Dec 2 2015" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
get_elpa_communicators \- Old, deprecated interface; better use \fBelpa_get_communicators\fP(3)
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.RI "success = \fBget_elpa_communicators\fP (mpi_comm_global, my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols)"
.br
.br
.RI "integer, intent(in) \fBmpi_comm_global\fP: global communicator for the calculation"
.br
.RI "integer, intent(in) \fBmy_prow\fP: row coordinate of the calling process in the process grid"
.br
.RI "integer, intent(in) \fBmy_pcol\fP: column coordinate of the calling process in the process grid"
.br
.RI "integer, intent(out) \fBmpi_comm_row\fP: communicator for communication within rows of processes"
.br
.RI "integer, intent(out) \fBmpi_comm_row\fP: communicator for communication within columns of processes"
.br
.RI "integer \fBsuccess\fP: return value indicating success or failure of the underlying MPI_COMM_SPLIT function"
.SS C INTERFACE
#include "elpa_generated.h"
.br
.RI "success = \fBget_elpa_communicators\fP (int mpi_comm_world, int my_prow, my_pcol, int *mpi_comm_rows, int *Pmpi_comm_cols);"
.br
.br
.RI "int \fBmpi_comm_global\fP: global communicator for the calculation"
.br
.RI "int \fBmy_prow\fP: row coordinate of the calling process in the process grid"
.br
.RI "int \fBmy_pcol\fP: column coordinate of the calling process in the process grid"
.br
.RI "int *\fBmpi_comm_row\fP: pointer to the communicator for communication within rows of processes"
.br
.RI "int *\fBmpi_comm_row\fP: pointer to the communicator for communication within columns of processes"
.br
.RI "int \fBsuccess\fP: return value indicating success or failure of the underlying MPI_COMM_SPLIT function"
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point: Please use \fBelpa_get_communicators\fP(3) !
All ELPA routines need MPI communicators for communicating within rows or columns of processes. These communicators are created from the \fBmpi_comm_global\fP communicator. It is assumed that the matrix used in ELPA is distributed with \fBmy_prow\fP rows and \fBmy_pcol\fP columns on the calling process. This function has to be invoked by all involved processes before any other calls to ELPA routines.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real\fP(3) \fBelpa_solve_evp_complex\fP(3) \fBelpa2_print_kernels\fP(1)
.TH "solve_evp_complex" 3 "Thu Mar 17 2016" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
solve_evp_complex \- solve the double-precision complex eigenvalue problem with the 1-stage ELPA solver.
This interface is old and deprecated. It is recommended to use \fBsolve_evp_complex_1stage_double\fP(3)
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.br
.RI "success = \fBsolve_evp_complex\fP (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, mpi_comm_all, useGPU)"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "integer, intent(in) \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.RI "complex*16, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBev\fP: on output the first \fBnev\fP computed eigenvalues"
.br
.RI "complex*16, intent(inout) \fBq\fP: on output the first \fBnev\fP computed eigenvectors"
.br
.RI "integer, intent(in) \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all MPI process used in ELPA"
.br
.RI "logical, optional, intent(in) \fBuseGPU\fP: decide whether GPUs should be used or not"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point. Use \fBsolve_evp_complex_1stage\fP(3) or \fBelpa_solve_evp_complex\fP(3).
Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real_double\fP(3) \fBelpa_solve_evp_real_single\fP(3) \fBelpa_solve_evp_complex_double\fP(3) \fBelpa_solve_evp_complex_single\fP(3) \fBelpa_solve_evp_real_1stage_double\fP(3) \fBelpa_solve_evp_real_1stage_single\fP(3) \fBelpa_solve_evp_complex_1stage_single\fP(3) \fBelpa_solve_evp_real_2stage_double\fP(3) \fBelpa_solve_evp_real_2stage_single\fP(3) \fBelpa_solve_evp_complex_2stage_double\fP(3) \fBelpa_solve_evp_complex_2stage_single\fP(3) \fBelpa2_print_kernels\fP(1)
.TH "solve_evp_real" 3 "Thu Mar 17 2016" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
solve_evp_real \- solve the double-precision real eigenvalue problem with the 1-stage ELPA solver.
This is an old and deprecated interface. It is recommended to use \fBsolve_evp_real_1stage_double\fP(3)
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.br
.RI "success = \fBsolve_evp_real\fP (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, mpi_comm_all, useGPU)"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "integer, intent(in) \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalules are calculated"
.br
.RI "real*8, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBev\fP: on output the first \fBnev\fP computed eigenvalues"
.br
.RI "real*8, intent(inout) \fBq\fP: on output the first \fBnev\fP computed eigenvectors"
.br
.RI "integer, intent(in) \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of block cyclic distributin, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in colums. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all MPI processes used in ELPA"
.br
.RI "logical, optional, intent(in) \fBuseGPU\fP: decide whether GPUs should be used or not"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SH DESCRIPTION
Old, deprecated interface, which will be deleted at some point. Use \fBsolve_evp_real_1stage\fP(3) or \fBelpa_solve_evp_real\fP(3).
Solve the real eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real_double\fP(3) \fBelpa_solve_evp_real_single\fP(3) \fBelpa_solve_evp_complex_double\fP(3) \fBelpa_solve_evp_complex_single\fP(3) \fBelpa_solve_evp_real_1stage_single\fP(3) \fBelpa_solve_evp_complex_1stage_double\fP(3) \fBelpa_solve_evp_complex_1stage_single\fP(3) \fBelpa_solve_evp_real_2stage_double\fP(3) \fBelpa_solve_evp_real_2stage_single\fP(3) \fBelpa_solve_evp_complex_2stage_double\fP(3) \fBelpa_solve_evp_complex_2stage_single\fP(3) \fBelpa2_print_kernels\fP(1)
......@@ -92,11 +92,8 @@ module elpa1
! The following routines are public:
private
public :: get_elpa_row_col_comms !< old, deprecated interface, will be deleted. Use elpa_get_communicators instead
public :: get_elpa_communicators !< Sets MPI row/col communicators; OLD and deprecated interface, will be deleted. Use elpa_get_communicators instead
public :: elpa_get_communicators !< Sets MPI row/col communicators as needed by ELPA
public :: solve_evp_real !< old, deprecated interface: Driver routine for real double-precision eigenvalue problem DO NOT USE. Will be deleted at some point
public :: elpa_solve_evp_real_1stage_double !< Driver routine for real double-precision 1-stage eigenvalue problem
public :: solve_evp_real_1stage !< Driver routine for real double-precision eigenvalue problem
......@@ -106,7 +103,6 @@ module elpa1
public :: elpa_solve_evp_real_1stage_single !< Driver routine for real single-precision 1-stage eigenvalue problem
#endif
public :: solve_evp_complex !< old, deprecated interface: Driver routine for complex double-precision eigenvalue problem DO NOT USE. Will be deleted at some point
public :: elpa_solve_evp_complex_1stage_double !< Driver routine for complex 1-stage eigenvalue problem
public :: solve_evp_complex_1stage !< Driver routine for complex double-precision eigenvalue problem
public :: solve_evp_complex_1stage_double !< Driver routine for complex double-precision eigenvalue problem
......@@ -159,82 +155,6 @@ module elpa1
logical, public :: elpa_print_times = .false. !< Set elpa_print_times to .true. for explicit timing outputs
!> \brief get_elpa_row_col_comms: old, deprecated interface, will be deleted. Use "elpa_get_communicators"
!> \details
!> The interface and variable definition is the same as in "elpa_get_communicators"
!> \param mpi_comm_global Global communicator for the calculations (in)
!>
!> \param my_prow Row coordinate of the calling process in the process grid (in)
!>
!> \param my_pcol Column coordinate of the calling process in the process grid (in)
!>
!> \param mpi_comm_rows Communicator for communicating within rows of processes (out)
!>
!> \param mpi_comm_cols Communicator for communicating within columns of processes (out)
!> \result mpierr integer error value of mpi_comm_split function
interface get_elpa_row_col_comms
module procedure elpa_get_communicators
end interface
!> \brief elpa_get_communicators: Fortran interface to set the communicators needed by ELPA
!> \details
!> The interface and variable definition is the same as in "elpa_get_communicators"
!> \param mpi_comm_global Global communicator for the calculations (in)
!>
!> \param my_prow Row coordinate of the calling process in the process grid (in)
!>
!> \param my_pcol Column coordinate of the calling process in the process grid (in)
!>
!> \param mpi_comm_rows Communicator for communicating within rows of processes (out)
!>
!> \param mpi_comm_cols Communicator for communicating within columns of processes (out)
!> \result mpierr integer error value of mpi_comm_split function
interface get_elpa_communicators
module procedure elpa_get_communicators
end interface
!> \brief solve_evp_real: old, deprecated Fortran function to solve the real eigenvalue problem with 1-stage solver. Will be deleted at some point. Better use "solve_evp_real_1stage" or "elpa_solve_evp_real"
!>
!> \details
!> The interface and variable definition is the same as in "elpa_solve_evp_real_1stage_double"
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!> Must be always dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!>
!> \result success
interface solve_evp_real
module procedure elpa_solve_evp_real_1stage_double
end interface
interface solve_evp_real_1stage
module procedure elpa_solve_evp_real_1stage_double
end interface
......@@ -281,45 +201,7 @@ module elpa1
end interface
!> \brief solve_evp_complex: old, deprecated Fortran function to solve the complex eigenvalue problem with 1-stage solver. Will be deleted at some point. Better use "solve_evp_complex_1stage" or "elpa_solve_evp_complex"
!>
!> \details
!> The interface and variable definition is the same as in "elpa_solve_evp_complex_1stage_double"
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!> Must be always dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!>
!> \result success
interface solve_evp_complex
module procedure elpa_solve_evp_complex_1stage_double
end interface
!> \brief solve_evp_complex: old, deprecated Fortran function to solve the complex eigenvalue problem with 1-stage solver. Will be deleted at some point. Better use "solve_evp_complex_1stage" or "elpa_solve_evp_complex"
!> \brief solve_evp_complex_1stage: old, deprecated Fortran function to solve the complex eigenvalue problem with 1-stage solver. Will be deleted at some point. Better use "elpa_solve_evp_complex_1stage_double" or "elpa_solve_evp_complex"
!>
!> \details
!> The interface and variable definition is the same as in "elpa_solve_evp_complex_1stage_double"
......
......@@ -43,59 +43,6 @@
#include "config-f90.h"
!lc> #include <complex.h>
!lc> /*! \brief C old, deprecated interface, will be deleted. Use "elpa_get_communicators"
!lc> *
!lc> *  \param mpi_comm_world    MPI global communicator (in)
!lc> * \param my_prow Row coordinate of the calling process in the process grid (in)
!lc> * \param my_pcol Column coordinate of the calling process in the process grid (in)
!lc> * \param mpi_comm_rows Communicator for communicating within rows of processes (out)
!lc> * \result int integer error value of mpi_comm_split function
!lc> */
!lc> int get_elpa_row_col_comms(int mpi_comm_world, int my_prow, int my_pcol, int *mpi_comm_rows, int *mpi_comm_cols);
function get_elpa_row_col_comms_wrapper_c_name1(mpi_comm_world, my_prow, my_pcol, &
mpi_comm_rows, mpi_comm_cols) &
result(mpierr) bind(C,name="get_elpa_row_col_comms")
use, intrinsic :: iso_c_binding
use elpa1, only : elpa_get_communicators
implicit none
integer(kind=c_int) :: mpierr
integer(kind=c_int), value :: mpi_comm_world, my_prow, my_pcol
integer(kind=c_int) :: mpi_comm_rows, mpi_comm_cols
mpierr = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
mpi_comm_rows, mpi_comm_cols)
end function
!lc> #include <complex.h>
!lc> /*! \brief C old, deprecated interface, will be deleted. Use "elpa_get_communicators"
!lc> *
!lc> *  \param mpi_comm_world    MPI global communicator (in)
!lc> * \param my_prow Row coordinate of the calling process in the process grid (in)
!lc> * \param my_pcol Column coordinate of the calling process in the process grid (in)
!lc> * \param mpi_comm_rows Communicator for communicating within rows of processes (out)
!lc> * \result int integer error value of mpi_comm_split function
!lc> */
!lc> int get_elpa_communicators(int mpi_comm_world, int my_prow, int my_pcol, int *mpi_comm_rows, int *mpi_comm_cols);
function get_elpa_row_col_comms_wrapper_c_name2(mpi_comm_world, my_prow, my_pcol, &
mpi_comm_rows, mpi_comm_cols) &
result(mpierr) bind(C,name="get_elpa_communicators")
use, intrinsic :: iso_c_binding
use elpa1, only : elpa_get_communicators
implicit none
integer(kind=c_int) :: mpierr
integer(kind=c_int), value :: mpi_comm_world, my_prow, my_pcol
integer(kind=c_int) :: mpi_comm_rows, mpi_comm_cols
mpierr = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
mpi_comm_rows, mpi_comm_cols)
end function
!lc> #include <complex.h>
!lc> /*! \brief C interface to create ELPA communicators
!lc> *
!lc> *  \param mpi_comm_world    MPI global communicator (in)
......
......@@ -60,7 +60,6 @@ module elpa_driver
implicit none
public :: elpa_solve_evp_real, elpa_solve_evp_complex
public :: elpa_solve_evp_real_double, elpa_solve_evp_complex_double
#ifdef WANT_SINGLE_PRECISION_REAL
public :: elpa_solve_evp_real_single
......@@ -69,103 +68,6 @@ module elpa_driver
public :: elpa_solve_evp_complex_single
#endif
!-------------------------------------------------------------------------------
!> \brief elpa_solve_evp_real: Old, deprecated interface. Better use elpa_solve_evp_real_double
!>
!> Parameters
!>
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!> Must be always dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols local columns of matrix a and q
!>
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param mpi_comm_all MPI communicator for the total processor set
!>
!> \param THIS_REAL_ELPA_KERNEL_API (optional) specify used ELPA 2stage
!> kernel via API (only evaluated if the 2-stage solver is used)
!>
!> \param use_qr (optional) use QR decomposition in the ELPA 2stage solver
!> \param useGPU (optional) use GPU version of ELPA 1stage
!>
!> \param method choose whether to use ELPA 1stage or 2stage solver
!> possible values: "1stage" => use ELPA 1stage solver
!> "2stage" => use ELPA 2stage solver
!> "auto" => (at the moment) use ELPA 2stage solver
!>
!> \result success logical, false if error occurred
!-------------------------------------------------------------------------------
interface elpa_solve_evp_real
module procedure elpa_solve_evp_real_double
end interface
!-------------------------------------------------------------------------------
!> \brief elpa_solve_evp_complex: Old, deprecated interface, better use elpa_solve_evp_complex_double
!>
!> Parameters
!>
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!> Must be always dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols local columns of matrix a and q
!>
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param mpi_comm_all MPI communicator for the total processor set
!>
!> \param THIS_COMPLEX_ELPA_KERNEL_API (optional) specify used ELPA 2stage
!> kernel via API (only evaluated if the 2-stage solver is used)
!> \param useGPU (optional) use GPU version of ELPA 1stage
!>
!> \param method choose whether to use ELPA 1stage or 2stage solver
!> possible values: "1stage" => use ELPA 1stage solver
!> "2stage" => use ELPA 2stage solver
!> "auto" => (at the moment) use ELPA 2stage solver
!>
!> \result success logical, false if error occurred
!-------------------------------------------------------------------------------
interface elpa_solve_evp_complex
module procedure elpa_solve_evp_complex_double
end interface
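! For orientation, a hedged usage sketch of the unified driver documented above: the required
! arguments follow the parameter list in the comments, and the optional "method" argument selects
! the solver ("1stage", "2stage" or "auto"). The keyword names are taken from that list and should
! be checked against the actual interface of elpa_solve_evp_real_double.
!
!   success = elpa_solve_evp_real_double(na, nev, a, lda, ev, q, ldq, nblk, matrixCols, &
!                                        mpi_comm_rows, mpi_comm_cols, mpi_comm_all,    &
!                                        method="2stage")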
contains
!-------------------------------------------------------------------------------
......
......@@ -181,9 +181,9 @@ program test_complex_gpu_version_double_precision
end if
! All ELPA routines need MPI communicators for communicating within
! rows or columns of processes, these are set in get_elpa_communicators.
! rows or columns of processes, these are set in elpa_get_communicators.
mpierr = get_elpa_communicators(mpi_comm_world, my_prow, my_pcol, &
mpierr = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
mpi_comm_rows, mpi_comm_cols)
if (myid==0) then
......
......@@ -178,9 +178,9 @@ program test_real_gpu_version_double_precision
end if
! All ELPA routines need MPI communicators for communicating within
! rows or columns of processes, these are set in get_elpa_communicators.
! rows or columns of processes, these are set in elpa_get_communicators.
mpierr = get_elpa_communicators(mpi_comm_world, my_prow, my_pcol, &
mpierr = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
mpi_comm_rows, mpi_comm_cols)
if (myid==0) then
......
......@@ -181,9 +181,9 @@ program test_complex_gpu_version_single_precision
end if
! All ELPA routines need MPI communicators for communicating within
! rows or columns of processes, these are set in get_elpa_communicators.
! rows or columns of processes, these are set in elpa_get_communicators.
mpierr = get_elpa_communicators(mpi_comm_world, my_prow, my_pcol, &
mpierr = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
mpi_comm_rows, mpi_comm_cols)
if (myid==0) then
......
......@@ -178,9 +178,9 @@ program test_real_gpu_version_single_precision
end if