Commit de0fabbc authored by Andreas Marek

Update doxygen documentation for ELPA 1stage

parent b67a04fd
@@ -134,43 +134,6 @@ module elpa1_impl
 public :: elpa_cholesky_complex_single_impl   !< Cholesky factorization of a single-precision complex matrix
 #endif
-  ! Timing results, set by every call to solve_evp_xxx
-!> \brief elpa_solve_evp_real_1stage_double_impl: Fortran function to solve the real eigenvalue problem with 1-stage solver. This is called by "elpa_solve_evp_real"
-!>
-! Parameters
-!
-!> \param na Order of matrix a
-!>
-!> \param nev Number of eigenvalues needed.
-!>            The smallest nev eigenvalues/eigenvectors are calculated.
-!>
-!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
-!>                          Distribution is like in Scalapack.
-!>                          The full matrix must be set (not only one half like in scalapack).
-!>                          Destroyed on exit (upper and lower half).
-!>
-!> \param lda Leading dimension of a
-!>
-!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
-!>
-!> \param q(ldq,matrixCols) On output: Eigenvectors of a
-!>                          Distribution is like in Scalapack.
-!>                          Must be always dimensioned to the full size (corresponding to (na,na))
-!>                          even if only a part of the eigenvalues is needed.
-!>
-!> \param ldq Leading dimension of q
-!>
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!>
-!> \param matrixCols distributed number of matrix columns
-!>
-!> \param mpi_comm_rows MPI-Communicator for rows
-!> \param mpi_comm_cols MPI-Communicator for columns
-!>
-!> \result success
 contains
 !-------------------------------------------------------------------------------
@@ -217,40 +180,33 @@ end function elpa_get_communicators_impl
 !> \brief elpa_solve_evp_real_1stage_double_impl: Fortran function to solve the real double-precision eigenvalue problem with 1-stage solver
 !>
-! Parameters
-!
-!> \param na Order of matrix a
-!>
-!> \param nev Number of eigenvalues needed.
-!>            The smallest nev eigenvalues/eigenvectors are calculated.
-!>
-!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
-!>                          Distribution is like in Scalapack.
-!>                          The full matrix must be set (not only one half like in scalapack).
-!>                          Destroyed on exit (upper and lower half).
-!>
-!> \param lda Leading dimension of a
-!>
-!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
-!>
-!> \param q(ldq,matrixCols) On output: Eigenvectors of a
-!>                          Distribution is like in Scalapack.
-!>                          Must be always dimensioned to the full size (corresponding to (na,na))
-!>                          even if only a part of the eigenvalues is needed.
-!>
-!> \param ldq Leading dimension of q
-!>
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!>
-!> \param matrixCols distributed number of matrix columns
-!>
-!> \param mpi_comm_rows MPI-Communicator for rows
-!> \param mpi_comm_cols MPI-Communicator for columns
-!> \param mpi_comm_all global MPI communicator
-!> \param useGPU use GPU version (.true. or .false.)
-!>
-!> \result success
+!> \details
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na              Order of matrix
+!> \param - obj%nev             Number of eigenvalues/vectors to be computed;
+!>                              the smallest nev eigenvalues/eigenvectors are calculated.
+!> \param - obj%local_nrows     Leading dimension of a
+!> \param - obj%local_ncols     Local columns of matrix q
+!> \param - obj%nblk            Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows   MPI communicator for rows
+!> \param - obj%mpi_comm_cols   MPI communicator for columns
+!> \param - obj%mpi_comm_parent Global (parent) MPI communicator
+!> \param - obj%gpu             Use GPU version (1 or 0)
+!>
+!> \param a(lda,matrixCols)     Distributed matrix for which eigenvalues are to be computed.
+!>                              Distribution is like in Scalapack.
+!>                              The full matrix must be set (not only one half like in scalapack).
+!>                              Destroyed on exit (upper and lower half).
+!>
+!> \param ev(na)                On output: eigenvalues of a, every processor gets the complete set
+!>
+!> \param q(ldq,matrixCols)     On output: eigenvectors of a.
+!>                              Distribution is like in Scalapack.
+!>                              Must always be dimensioned to the full size (corresponding to (na,na)),
+!>                              even if only a part of the eigenvalues is needed.
+!>
+!> \result success
 #define REALCASE 1
 #define DOUBLE_PRECISION 1
 #include "../general/precision_macros.h"
@@ -260,40 +216,33 @@ end function elpa_get_communicators_impl
 #ifdef WANT_SINGLE_PRECISION_REAL
 !> \brief elpa_solve_evp_real_1stage_single_impl: Fortran function to solve the real single-precision eigenvalue problem with 1-stage solver
-!>
-! Parameters
-!
-!> \param na Order of matrix a
-!>
-!> \param nev Number of eigenvalues needed.
-!>            The smallest nev eigenvalues/eigenvectors are calculated.
-!>
-!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
-!>                          Distribution is like in Scalapack.
-!>                          The full matrix must be set (not only one half like in scalapack).
-!>                          Destroyed on exit (upper and lower half).
-!>
-!> \param lda Leading dimension of a
-!>
-!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
-!>
-!> \param q(ldq,matrixCols) On output: Eigenvectors of a
-!>                          Distribution is like in Scalapack.
-!>                          Must be always dimensioned to the full size (corresponding to (na,na))
-!>                          even if only a part of the eigenvalues is needed.
-!>
-!> \param ldq Leading dimension of q
-!>
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!>
-!> \param matrixCols distributed number of matrix columns
-!>
-!> \param mpi_comm_rows MPI-Communicator for rows
-!> \param mpi_comm_cols MPI-Communicator for columns
-!> \param mpi_comm_all global MPI communicator
-!> \param useGPU
-!>
-!> \result success
+!> \details
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na              Order of matrix
+!> \param - obj%nev             Number of eigenvalues/vectors to be computed;
+!>                              the smallest nev eigenvalues/eigenvectors are calculated.
+!> \param - obj%local_nrows     Leading dimension of a
+!> \param - obj%local_ncols     Local columns of matrix q
+!> \param - obj%nblk            Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows   MPI communicator for rows
+!> \param - obj%mpi_comm_cols   MPI communicator for columns
+!> \param - obj%mpi_comm_parent Global (parent) MPI communicator
+!> \param - obj%gpu             Use GPU version (1 or 0)
+!>
+!> \param a(lda,matrixCols)     Distributed matrix for which eigenvalues are to be computed.
+!>                              Distribution is like in Scalapack.
+!>                              The full matrix must be set (not only one half like in scalapack).
+!>                              Destroyed on exit (upper and lower half).
+!>
+!> \param ev(na)                On output: eigenvalues of a, every processor gets the complete set
+!>
+!> \param q(ldq,matrixCols)     On output: eigenvectors of a.
+!>                              Distribution is like in Scalapack.
+!>                              Must always be dimensioned to the full size (corresponding to (na,na)),
+!>                              even if only a part of the eigenvalues is needed.
+!>
+!> \result success
 #define REALCASE 1
 #define SINGLE_PRECISION 1
@@ -304,40 +253,33 @@ end function elpa_get_communicators_impl
 #endif /* WANT_SINGLE_PRECISION_REAL */
 !> \brief elpa_solve_evp_complex_1stage_double_impl: Fortran function to solve the complex double-precision eigenvalue problem with 1-stage solver
-!>
-! Parameters
-!
-!> \param na Order of matrix a
-!>
-!> \param nev Number of eigenvalues needed.
-!>            The smallest nev eigenvalues/eigenvectors are calculated.
-!>
-!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
-!>                          Distribution is like in Scalapack.
-!>                          The full matrix must be set (not only one half like in scalapack).
-!>                          Destroyed on exit (upper and lower half).
-!>
-!> \param lda Leading dimension of a
-!>
-!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
-!>
-!> \param q(ldq,matrixCols) On output: Eigenvectors of a
-!>                          Distribution is like in Scalapack.
-!>                          Must be always dimensioned to the full size (corresponding to (na,na))
-!>                          even if only a part of the eigenvalues is needed.
-!>
-!> \param ldq Leading dimension of q
-!>
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!>
-!> \param matrixCols distributed number of matrix columns
-!>
-!> \param mpi_comm_rows MPI-Communicator for rows
-!> \param mpi_comm_cols MPI-Communicator for columns
-!> \param mpi_comm_all global MPI communicator
-!> \param useGPU use GPU version (.true. or .false.)
-!>
-!> \result success
+!> \details
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na              Order of matrix
+!> \param - obj%nev             Number of eigenvalues/vectors to be computed;
+!>                              the smallest nev eigenvalues/eigenvectors are calculated.
+!> \param - obj%local_nrows     Leading dimension of a
+!> \param - obj%local_ncols     Local columns of matrix q
+!> \param - obj%nblk            Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows   MPI communicator for rows
+!> \param - obj%mpi_comm_cols   MPI communicator for columns
+!> \param - obj%mpi_comm_parent Global (parent) MPI communicator
+!> \param - obj%gpu             Use GPU version (1 or 0)
+!>
+!> \param a(lda,matrixCols)     Distributed matrix for which eigenvalues are to be computed.
+!>                              Distribution is like in Scalapack.
+!>                              The full matrix must be set (not only one half like in scalapack).
+!>                              Destroyed on exit (upper and lower half).
+!>
+!> \param ev(na)                On output: eigenvalues of a, every processor gets the complete set
+!>
+!> \param q(ldq,matrixCols)     On output: eigenvectors of a.
+!>                              Distribution is like in Scalapack.
+!>                              Must always be dimensioned to the full size (corresponding to (na,na)),
+!>                              even if only a part of the eigenvalues is needed.
+!>
+!> \result success
 #define COMPLEXCASE 1
 #define DOUBLE_PRECISION 1
 #include "../general/precision_macros.h"
@@ -349,40 +291,33 @@ end function elpa_get_communicators_impl
 #ifdef WANT_SINGLE_PRECISION_COMPLEX
 !> \brief elpa_solve_evp_complex_1stage_single_impl: Fortran function to solve the complex single-precision eigenvalue problem with 1-stage solver
-!>
-! Parameters
-!
-!> \param na Order of matrix a
-!>
-!> \param nev Number of eigenvalues needed.
-!>            The smallest nev eigenvalues/eigenvectors are calculated.
-!>
-!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
-!>                          Distribution is like in Scalapack.
-!>                          The full matrix must be set (not only one half like in scalapack).
-!>                          Destroyed on exit (upper and lower half).
-!>
-!> \param lda Leading dimension of a
-!>
-!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
-!>
-!> \param q(ldq,matrixCols) On output: Eigenvectors of a
-!>                          Distribution is like in Scalapack.
-!>                          Must be always dimensioned to the full size (corresponding to (na,na))
-!>                          even if only a part of the eigenvalues is needed.
-!>
-!> \param ldq Leading dimension of q
-!>
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!>
-!> \param matrixCols distributed number of matrix columns
-!>
-!> \param mpi_comm_rows MPI-Communicator for rows
-!> \param mpi_comm_cols MPI-Communicator for columns
-!> \param mpi_comm_all global MPI communicator
-!> \param useGPU
-!>
-!> \result success
+!> \details
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na              Order of matrix
+!> \param - obj%nev             Number of eigenvalues/vectors to be computed;
+!>                              the smallest nev eigenvalues/eigenvectors are calculated.
+!> \param - obj%local_nrows     Leading dimension of a
+!> \param - obj%local_ncols     Local columns of matrix q
+!> \param - obj%nblk            Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows   MPI communicator for rows
+!> \param - obj%mpi_comm_cols   MPI communicator for columns
+!> \param - obj%mpi_comm_parent Global (parent) MPI communicator
+!> \param - obj%gpu             Use GPU version (1 or 0)
+!>
+!> \param a(lda,matrixCols)     Distributed matrix for which eigenvalues are to be computed.
+!>                              Distribution is like in Scalapack.
+!>                              The full matrix must be set (not only one half like in scalapack).
+!>                              Destroyed on exit (upper and lower half).
+!>
+!> \param ev(na)                On output: eigenvalues of a, every processor gets the complete set
+!>
+!> \param q(ldq,matrixCols)     On output: eigenvectors of a.
+!>                              Distribution is like in Scalapack.
+!>                              Must always be dimensioned to the full size (corresponding to (na,na)),
+!>                              even if only a part of the eigenvalues is needed.
+!>
+!> \result success
 #define COMPLEXCASE 1
 #define SINGLE_PRECISION
...
@@ -118,18 +118,19 @@ module elpa1_auxiliary_impl
 #include "../general/precision_macros.h"
 !> \brief elpa_invert_trm_real_double: Inverts a double-precision real upper triangular matrix
 !> \details
-!> \param na Order of matrix
-!> \param a(lda,matrixCols) Distributed matrix which should be inverted
-!>                          Distribution is like in Scalapack.
-!>                          Only upper triangle needs to be set.
-!>                          The lower triangle is not referenced.
-!> \param lda Leading dimension of a
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!> \param matrixCols local columns of matrix a
-!> \param mpi_comm_rows MPI communicator for rows
-!> \param mpi_comm_cols MPI communicator for columns
-!> \param wantDebug logical, more debug information on failure
-!> \result success logical, reports success or failure
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na             Order of matrix
+!> \param - obj%local_nrows    Leading dimension of a
+!> \param - obj%local_ncols    Local columns of matrix a
+!> \param - obj%nblk           Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows  MPI communicator for rows
+!> \param - obj%mpi_comm_cols  MPI communicator for columns
+!> \param - obj%wantDebug      logical, more debug information on failure
+!> \param a(lda,matrixCols)    Distributed matrix which should be inverted.
+!>                             Distribution is like in Scalapack.
+!>                             Only upper triangle needs to be set.
+!>                             The lower triangle is not referenced.
+!> \result success             logical, reports success or failure
 function elpa_invert_trm_real_double_impl(obj, a) result(success)
 #include "elpa_invert_trm.X90"
 end function elpa_invert_trm_real_double_impl
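On the public-API side the triangular inversion documented above is reached through a single method call on a configured elpa_t handle. A short sketch, assuming the handle e was set up as in the eigensolver sketch earlier and that the public method is named invert_triangular (an assumption, not taken from this diff):

  ! a(lda,matrixCols): distributed upper triangular matrix, inverted in place;
  ! only the upper triangle needs to be set, the lower triangle is not referenced.
  call e%invert_triangular(a, error)
  if (error /= ELPA_OK) print *, "elpa invert_trm failed, error code:", error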
@@ -143,18 +144,20 @@ module elpa1_auxiliary_impl
 !> \brief elpa_invert_trm_real_single_impl: Inverts a single-precision real upper triangular matrix
 !> \details
-!> \param na Order of matrix
-!> \param a(lda,matrixCols) Distributed matrix which should be inverted
-!>                          Distribution is like in Scalapack.
-!>                          Only upper triangle needs to be set.
-!>                          The lower triangle is not referenced.
-!> \param lda Leading dimension of a
-!> \param matrixCols local columns of matrix a
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!> \param mpi_comm_rows MPI communicator for rows
-!> \param mpi_comm_cols MPI communicator for columns
-!> \param wantDebug logical, more debug information on failure
-!> \result success logical, reports success or failure
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na             Order of matrix
+!> \param - obj%local_nrows    Leading dimension of a
+!> \param - obj%local_ncols    Local columns of matrix a
+!> \param - obj%nblk           Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows  MPI communicator for rows
+!> \param - obj%mpi_comm_cols  MPI communicator for columns
+!> \param - obj%wantDebug      logical, more debug information on failure
+!> \param a(lda,matrixCols)    Distributed matrix which should be inverted.
+!>                             Distribution is like in Scalapack.
+!>                             Only upper triangle needs to be set.
+!>                             The lower triangle is not referenced.
+!> \result success             logical, reports success or failure
 function elpa_invert_trm_real_single_impl(obj, a) result(success)
 #include "elpa_invert_trm.X90"
 end function elpa_invert_trm_real_single_impl
@@ -170,19 +173,19 @@ module elpa1_auxiliary_impl
 !> \brief elpa_cholesky_complex_double_impl: Cholesky factorization of a double-precision complex hermitian matrix
 !> \details
-!> \param na Order of matrix
-!> \param a(lda,matrixCols) Distributed matrix which should be factorized.
-!>                          Distribution is like in Scalapack.
-!>                          Only upper triangle needs to be set.
-!>                          On return, the upper triangle contains the Cholesky factor
-!>                          and the lower triangle is set to 0.
-!> \param lda Leading dimension of a
-!> \param matrixCols local columns of matrix a
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!> \param mpi_comm_rows MPI communicator for rows
-!> \param mpi_comm_cols MPI communicator for columns
-!> \param wantDebug logical, more debug information on failure
-!> \result success logical, reports success or failure
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na             Order of matrix
+!> \param - obj%local_nrows    Leading dimension of a
+!> \param - obj%local_ncols    Local columns of matrix a
+!> \param - obj%nblk           Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows  MPI communicator for rows
+!> \param - obj%mpi_comm_cols  MPI communicator for columns
+!> \param - obj%wantDebug      logical, more debug information on failure
+!> \param a(lda,matrixCols)    Distributed matrix which should be factorized.
+!>                             Distribution is like in Scalapack.
+!>                             Only upper triangle needs to be set.
+!>                             On return, the upper triangle contains the Cholesky factor
+!>                             and the lower triangle is set to 0.
+!> \result success             logical, reports success or failure
 function elpa_cholesky_complex_double_impl(obj, a) result(success)
 #include "elpa_cholesky_template.X90"
@@ -198,19 +201,19 @@ module elpa1_auxiliary_impl
 !> \brief elpa_cholesky_complex_single_impl: Cholesky factorization of a single-precision complex hermitian matrix
 !> \details
-!> \param na Order of matrix
-!> \param a(lda,matrixCols) Distributed matrix which should be factorized.
-!>                          Distribution is like in Scalapack.
-!>                          Only upper triangle needs to be set.
-!>                          On return, the upper triangle contains the Cholesky factor
-!>                          and the lower triangle is set to 0.
-!> \param lda Leading dimension of a
-!> \param matrixCols local columns of matrix a
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!> \param mpi_comm_rows MPI communicator for rows
-!> \param mpi_comm_cols MPI communicator for columns
-!> \param wantDebug logical, more debug information on failure
-!> \result success logical, reports success or failure
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na             Order of matrix
+!> \param - obj%local_nrows    Leading dimension of a
+!> \param - obj%local_ncols    Local columns of matrix a
+!> \param - obj%nblk           Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows  MPI communicator for rows
+!> \param - obj%mpi_comm_cols  MPI communicator for columns
+!> \param - obj%wantDebug      logical, more debug information on failure
+!> \param a(lda,matrixCols)    Distributed matrix which should be factorized.
+!>                             Distribution is like in Scalapack.
+!>                             Only upper triangle needs to be set.
+!>                             On return, the upper triangle contains the Cholesky factor
+!>                             and the lower triangle is set to 0.
+!> \result success             logical, reports success or failure
 function elpa_cholesky_complex_single_impl(obj, a) result(success)
 #include "elpa_cholesky_template.X90"
@@ -227,19 +230,19 @@ module elpa1_auxiliary_impl
 !> \brief elpa_invert_trm_complex_double_impl: Inverts a double-precision complex upper triangular matrix
 !> \details
-!> \param na Order of matrix
-!> \param a(lda,matrixCols) Distributed matrix which should be inverted
-!>                          Distribution is like in Scalapack.
-!>                          Only upper triangle needs to be set.
-!>                          The lower triangle is not referenced.
-!> \param lda Leading dimension of a
-!> \param matrixCols local columns of matrix a
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!> \param mpi_comm_rows MPI communicator for rows
-!> \param mpi_comm_cols MPI communicator for columns
-!> \param wantDebug logical, more debug information on failure
-!> \result success logical, reports success or failure
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na             Order of matrix
+!> \param - obj%local_nrows    Leading dimension of a
+!> \param - obj%local_ncols    Local columns of matrix a
+!> \param - obj%nblk           Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows  MPI communicator for rows
+!> \param - obj%mpi_comm_cols  MPI communicator for columns
+!> \param - obj%wantDebug      logical, more debug information on failure
+!> \param a(lda,matrixCols)    Distributed matrix which should be inverted.
+!>                             Distribution is like in Scalapack.
+!>                             Only upper triangle needs to be set.
+!>                             The lower triangle is not referenced.
+!> \result success             logical, reports success or failure
 function elpa_invert_trm_complex_double_impl(obj, a) result(success)
 #include "elpa_invert_trm.X90"
 end function elpa_invert_trm_complex_double_impl
@@ -253,19 +256,19 @@ module elpa1_auxiliary_impl
 !> \brief elpa_invert_trm_complex_single_impl: Inverts a single-precision complex upper triangular matrix
 !> \details
-!> \param na Order of matrix
-!> \param a(lda,matrixCols) Distributed matrix which should be inverted
-!>                          Distribution is like in Scalapack.
-!>                          Only upper triangle needs to be set.
-!>                          The lower triangle is not referenced.
-!> \param lda Leading dimension of a
-!> \param matrixCols local columns of matrix a
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!> \param mpi_comm_rows MPI communicator for rows
-!> \param mpi_comm_cols MPI communicator for columns
-!> \param wantDebug logical, more debug information on failure
-!> \result success logical, reports success or failure
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na             Order of matrix
+!> \param - obj%local_nrows    Leading dimension of a
+!> \param - obj%local_ncols    Local columns of matrix a
+!> \param - obj%nblk           Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows  MPI communicator for rows
+!> \param - obj%mpi_comm_cols  MPI communicator for columns
+!> \param - obj%wantDebug      logical, more debug information on failure
+!> \param a(lda,matrixCols)    Distributed matrix which should be inverted.
+!>                             Distribution is like in Scalapack.
+!>                             Only upper triangle needs to be set.
+!>                             The lower triangle is not referenced.
+!> \result success             logical, reports success or failure
 function elpa_invert_trm_complex_single_impl(obj, a) result(success)
 #include "elpa_invert_trm.X90"
 end function elpa_invert_trm_complex_single_impl
@@ -433,21 +436,20 @@ module elpa1_auxiliary_impl
 !> \brief elpa_solve_tridi_double_impl: Solve tridiagonal eigensystem for a double-precision matrix with divide and conquer method
 !> \details
-!>
-!> \param na Matrix dimension
-!> \param nev number of eigenvalues/vectors to be computed
-!> \param d array d(na) on input diagonal elements of tridiagonal matrix, on
-!>          output the eigenvalues in ascending order
-!> \param e array e(na) on input subdiagonal elements of matrix, on exit destroyed
-!> \param q on exit: matrix q(ldq,matrixCols) contains the eigenvectors
-!> \param ldq leading dimension of matrix q
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!> \param matrixCols columns of matrix q
-!> \param mpi_comm_rows MPI communicator for rows
-!> \param mpi_comm_cols MPI communicator for columns
-!> \param wantDebug logical, give more debug information if .true.
-!> \result success logical, .true. on success, else .false.
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na             Order of matrix
+!> \param - obj%nev            Number of eigenvalues/vectors to be computed
+!> \param - obj%local_nrows    Leading dimension of q
+!> \param - obj%local_ncols    Local columns of matrix q
+!> \param - obj%nblk           Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows  MPI communicator for rows
+!> \param - obj%mpi_comm_cols  MPI communicator for columns
+!> \param - obj%wantDebug      logical, more debug information on failure
+!> \param d                    array d(na): on input diagonal elements of tridiagonal matrix,
+!>                             on output the eigenvalues in ascending order
+!> \param e                    array e(na): on input subdiagonal elements of matrix, destroyed on exit
+!> \param q                    on exit: matrix q(ldq,matrixCols) contains the eigenvectors
+!> \result success             logical, reports success or failure
 function elpa_solve_tridi_double_impl(obj, d, e, q) result(success)
 #include "elpa_solve_tridi_impl_public.X90"
@@ -463,21 +465,20 @@ module elpa1_auxiliary_impl
 !> \brief elpa_solve_tridi_single_impl: Solve tridiagonal eigensystem for a single-precision matrix with divide and conquer method
 !> \details
-!>
-!> \param na Matrix dimension
-!> \param nev number of eigenvalues/vectors to be computed
-!> \param d array d(na) on input diagonal elements of tridiagonal matrix, on
-!>          output the eigenvalues in ascending order
-!> \param e array e(na) on input subdiagonal elements of matrix, on exit destroyed
-!> \param q on exit: matrix q(ldq,matrixCols) contains the eigenvectors
-!> \param ldq leading dimension of matrix q
-!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
-!> \param matrixCols columns of matrix q
-!> \param mpi_comm_rows MPI communicator for rows
-!> \param mpi_comm_cols MPI communicator for columns
-!> \param wantDebug logical, give more debug information if .true.
-!> \result success logical, .true. on success, else .false.
+!> \param obj elpa_t object, which contains:
+!> \param - obj%na             Order of matrix
+!> \param - obj%nev            Number of eigenvalues/vectors to be computed
+!> \param - obj%local_nrows    Leading dimension of q
+!> \param - obj%local_ncols    Local columns of matrix q
+!> \param - obj%nblk           Blocksize of cyclic distribution, must be the same in both directions!
+!> \param - obj%mpi_comm_rows  MPI communicator for rows
+!> \param - obj%mpi_comm_cols  MPI communicator for columns
+!> \param - obj%wantDebug      logical, more debug information on failure
+!> \param d                    array d(na): on input diagonal elements of tridiagonal matrix,
+!>                             on output the eigenvalues in ascending order
+!> \param e                    array e(na): on input subdiagonal elements of matrix, destroyed on exit
+!> \param q                    on exit: matrix q(ldq,matrixCols) contains the eigenvectors
+!> \result success             logical, reports success or failure
 function elpa_solve_tridi_single_impl(obj, d, e, q) result(success)
 #include "elpa_solve_tridi_impl_public.X90"
...