Commit de0fabbc authored by Andreas Marek

Update doxygen documentation for ELPA 1stage

parent b67a04fd
@@ -134,43 +134,6 @@ module elpa1_impl
public :: elpa_cholesky_complex_single_impl !< Cholesky factorization of a single-precision complex matrix
#endif
! Timing results, set by every call to solve_evp_xxx
!> \brief elpa_solve_evp_real_1stage_double_impl: Fortran function to solve the real eigenvalue problem with 1-stage solver. This is called by "elpa_solve_evp_real"
!>
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!>                              Must always be dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!>
!> \result success
contains
!-------------------------------------------------------------------------------
@@ -217,40 +180,33 @@ end function elpa_get_communicators_impl
!> \brief elpa_solve_evp_real_1stage_double_impl: Fortran function to solve the real double-precision eigenvalue problem with 1-stage solver
!>
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!>                              Must always be dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!> \param mpi_comm_all global MPI communicator
!> \param useGPU use GPU version (.true. or .false.)
!>
!> \result success
!> \details
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%nev number of eigenvalues/vectors to be computed
!> The smallest nev eigenvalues/eigenvectors are calculated.
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix q
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%mpi_comm_parent  global (parent) MPI communicator
!> \param - obj%gpu use GPU version (1 or 0)
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!>                              Must always be dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!>
!> \result success
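!>
!> \par Usage sketch
!> A minimal, non-authoritative sketch of a call to this routine, inferred from the
!> parameter list above. It assumes "obj" is an elpa_t object already configured with
!> na, nev, local_nrows, local_ncols, nblk and the MPI communicators listed above;
!> na, na_rows and na_cols are placeholders for the caller's matrix order and local
!> block-cyclic dimensions.
!> \code{.f90}
!>   integer, parameter         :: dp = kind(1.0d0)
!>   integer                    :: na, na_rows, na_cols   ! set by the caller's block-cyclic layout (placeholders)
!>   real(kind=dp), allocatable :: a(:,:), q(:,:), ev(:)
!>   logical                    :: success
!>
!>   allocate(a(na_rows, na_cols))   ! full distributed matrix, both halves set
!>   allocate(q(na_rows, na_cols))   ! receives the distributed eigenvectors
!>   allocate(ev(na))                ! receives all eigenvalues on every MPI task
!>
!>   ! ... fill a with the matrix to be diagonalized ...
!>   success = elpa_solve_evp_real_1stage_double_impl(obj, a, ev, q)
!>   if (.not. success) print *, "ELPA 1-stage solver reported failure"
!> \endcode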
#define REALCASE 1
#define DOUBLE_PRECISION 1
#include "../general/precision_macros.h"
@@ -260,40 +216,33 @@ end function elpa_get_communicators_impl
#ifdef WANT_SINGLE_PRECISION_REAL
!> \brief elpa_solve_evp_real_1stage_single_impl: Fortran function to solve the real single-precision eigenvalue problem with 1-stage solver
!>
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!>                              Must always be dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!> \param mpi_comm_all           global MPI communicator
!> \param useGPU                 use GPU version (.true. or .false.)
!>
!> \result success
!> \details
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%nev number of eigenvalues/vectors to be computed
!> The smallest nev eigenvalues/eigenvectors are calculated.
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix q
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%mpi_comm_parent  global (parent) MPI communicator
!> \param - obj%gpu use GPU version (1 or 0)
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!>                              Must always be dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!>
!> \result success
#define REALCASE 1
#define SINGLE_PRECISION 1
@@ -304,40 +253,33 @@ end function elpa_get_communicators_impl
#endif /* WANT_SINGLE_PRECISION_REAL */
!> \brief elpa_solve_evp_complex_1stage_double_impl: Fortran function to solve the complex double-precision eigenvalue problem with 1-stage solver
!>
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!>                              Must always be dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!> \param mpi_comm_all global MPI Communicator
!> \param useGPU use GPU version (.true. or .false.)
!>
!> \result success
!> \details
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%nev number of eigenvalues/vectors to be computed
!> The smallest nev eigenvalues/eigenvectors are calculated.
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix q
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%mpi_comm_parent  global (parent) MPI communicator
!> \param - obj%gpu use GPU version (1 or 0)
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!>                              Must always be dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!>
!> \result success
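!>
!> \par Usage sketch
!> A hedged sketch analogous to the real case, inferred from the parameter list above;
!> "obj", na, na_rows and na_cols are assumed to be set up by the caller. The eigenvalues
!> ev of the hermitian matrix are real even in the complex case.
!> \code{.f90}
!>   integer, parameter            :: dp = kind(1.0d0)
!>   integer                       :: na, na_rows, na_cols   ! caller's sizes (placeholders)
!>   complex(kind=dp), allocatable :: a(:,:), q(:,:)
!>   real(kind=dp), allocatable    :: ev(:)
!>   logical                       :: success
!>
!>   allocate(a(na_rows, na_cols), q(na_rows, na_cols), ev(na))
!>   ! ... fill a with the full distributed hermitian matrix ...
!>   success = elpa_solve_evp_complex_1stage_double_impl(obj, a, ev, q)
!> \endcode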
#define COMPLEXCASE 1
#define DOUBLE_PRECISION 1
#include "../general/precision_macros.h"
@@ -349,40 +291,33 @@ end function elpa_get_communicators_impl
#ifdef WANT_SINGLE_PRECISION_COMPLEX
!> \brief elpa_solve_evp_complex_1stage_single_impl: Fortran function to solve the complex single-precision eigenvalue problem with 1-stage solver
!>
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!>                              Must always be dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!> \param mpi_comm_all global MPI communicator
!> \param useGPU                 use GPU version (.true. or .false.)
!>
!> \result success
!> \details
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%nev number of eigenvalues/vectors to be computed
!> The smallest nev eigenvalues/eigenvectors are calculated.
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix q
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%mpi_comm_parent  global (parent) MPI communicator
!> \param - obj%gpu use GPU version (1 or 0)
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!>                              Must always be dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!>
!> \result success
#define COMPLEXCASE 1
#define SINGLE_PRECISION
@@ -118,18 +118,19 @@ module elpa1_auxiliary_impl
#include "../general/precision_macros.h"
!> \brief elpa_invert_trm_real_double: Inverts a double-precision real upper triangular matrix
!> \details
!> \param na Order of matrix
!> \param a(lda,matrixCols) Distributed matrix which should be inverted
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> The lower triangle is not referenced.
!> \param lda Leading dimension of a
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param matrixCols local columns of matrix a
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param wantDebug logical, more debug information on failure
!> \result success              logical, reports success or failure
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix a
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%wantDebug logical, more debug information on failure
!> \param a(lda,matrixCols) Distributed matrix which should be inverted
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> The lower triangle is not referenced.
!> \result success              logical, reports success or failure
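!>
!> \par Usage sketch
!> A hedged example of a call, inferred from the parameter list above; "obj" is assumed
!> to be an already configured elpa_t object and na_rows/na_cols are the caller's local
!> block-cyclic dimensions.
!> \code{.f90}
!>   integer, parameter         :: dp = kind(1.0d0)
!>   integer                    :: na_rows, na_cols        ! caller's local sizes (placeholders)
!>   real(kind=dp), allocatable :: a(:,:)
!>   logical                    :: success
!>
!>   allocate(a(na_rows, na_cols))
!>   ! ... set the upper triangle of the distributed triangular matrix ...
!>   success = elpa_invert_trm_real_double_impl(obj, a)
!>   ! on success the upper triangle of a holds the inverse; the lower triangle is not referenced
!> \endcode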
function elpa_invert_trm_real_double_impl(obj, a) result(success)
#include "elpa_invert_trm.X90"
end function elpa_invert_trm_real_double_impl
@@ -143,18 +144,20 @@ module elpa1_auxiliary_impl
!> \brief elpa_invert_trm_real_single_impl: Inverts a single-precision real upper triangular matrix
!> \details
!> \param na Order of matrix
!> \param a(lda,matrixCols) Distributed matrix which should be inverted
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> The lower triangle is not referenced.
!> \param lda Leading dimension of a
!> \param matrixCols local columns of matrix a
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param wantDebug logical, more debug information on failure
!> \result success              logical, reports success or failure
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix a
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%wantDebug logical, more debug information on failure
!> \param a(lda,matrixCols) Distributed matrix which should be inverted
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> The lower triangle is not referenced.
!> \result success              logical, reports success or failure
function elpa_invert_trm_real_single_impl(obj, a) result(success)
#include "elpa_invert_trm.X90"
end function elpa_invert_trm_real_single_impl
@@ -170,19 +173,19 @@ module elpa1_auxiliary_impl
!> \brief elpa_cholesky_complex_double_impl: Cholesky factorization of a double-precision complex hermitian matrix
!> \details
!> \param na Order of matrix
!> \param a(lda,matrixCols) Distributed matrix which should be factorized.
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> On return, the upper triangle contains the Cholesky factor
!> and the lower triangle is set to 0.
!> \param lda Leading dimension of a
!> \param matrixCols local columns of matrix a
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param wantDebug logical, more debug information on failure
!> \result success              logical, reports success or failure
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix a
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%wantDebug logical, more debug information on failure
!> \param a(lda,matrixCols)      Distributed matrix which should be factorized.
!>                               Distribution is like in Scalapack.
!>                               Only upper triangle needs to be set.
!>                               On return, the upper triangle contains the Cholesky factor
!>                               and the lower triangle is set to 0.
!> \result success               logical, reports success or failure
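!>
!> \par Usage sketch
!> A hedged example of a call, inferred from the parameter list above; "obj" is assumed
!> to be an already configured elpa_t object and na_rows/na_cols are the caller's local
!> block-cyclic dimensions.
!> \code{.f90}
!>   integer, parameter            :: dp = kind(1.0d0)
!>   integer                       :: na_rows, na_cols     ! caller's local sizes (placeholders)
!>   complex(kind=dp), allocatable :: a(:,:)
!>   logical                       :: success
!>
!>   allocate(a(na_rows, na_cols))
!>   ! ... set the upper triangle of the distributed hermitian matrix ...
!>   success = elpa_cholesky_complex_double_impl(obj, a)
!>   ! on success the upper triangle of a holds the Cholesky factor, the lower triangle is set to 0
!> \endcode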
function elpa_cholesky_complex_double_impl(obj, a) result(success)
#include "elpa_cholesky_template.X90"
@@ -198,19 +201,19 @@ module elpa1_auxiliary_impl
!> \brief elpa_cholesky_complex_single_impl: Cholesky factorization of a single-precision complex hermitian matrix
!> \details
!> \param na Order of matrix
!> \param a(lda,matrixCols) Distributed matrix which should be factorized.
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> On return, the upper triangle contains the Cholesky factor
!> and the lower triangle is set to 0.
!> \param lda Leading dimension of a
!> \param matrixCols local columns of matrix a
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param wantDebug logical, more debug information on failure
!> \result success              logical, reports success or failure
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix a
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%wantDebug logical, more debug information on failure
!> \param a(lda,matrixCols)      Distributed matrix which should be factorized.
!>                               Distribution is like in Scalapack.
!>                               Only upper triangle needs to be set.
!>                               On return, the upper triangle contains the Cholesky factor
!>                               and the lower triangle is set to 0.
!> \result success               logical, reports success or failure
function elpa_cholesky_complex_single_impl(obj, a) result(success)
#include "elpa_cholesky_template.X90"
@@ -227,19 +230,19 @@ module elpa1_auxiliary_impl
!> \brief elpa_invert_trm_complex_double_impl: Inverts a double-precision complex upper triangular matrix
!> \details
!> \param na Order of matrix
!> \param a(lda,matrixCols) Distributed matrix which should be inverted
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> The lower triangle is not referenced.
!> \param lda Leading dimension of a
!> \param matrixCols local columns of matrix a
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param wantDebug logical, more debug information on failure
!> \result success              logical, reports success or failure
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix a
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%wantDebug logical, more debug information on failure
!> \param a(lda,matrixCols) Distributed matrix which should be inverted
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> The lower triangle is not referenced.
!> \result success              logical, reports success or failure
function elpa_invert_trm_complex_double_impl(obj, a) result(success)
#include "elpa_invert_trm.X90"
end function elpa_invert_trm_complex_double_impl
@@ -253,19 +256,19 @@ module elpa1_auxiliary_impl
!> \brief elpa_invert_trm_complex_single_impl: Inverts a single-precision complex upper triangular matrix
!> \details
!> \param na Order of matrix
!> \param a(lda,matrixCols) Distributed matrix which should be inverted
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> The lower triangle is not referenced.
!> \param lda Leading dimension of a
!> \param matrixCols local columns of matrix a
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param wantDebug logical, more debug information on failure
!> \result success              logical, reports success or failure
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%local_nrows Leading dimension of a
!> \param - obj%local_ncols local columns of matrix a
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%wantDebug logical, more debug information on failure
!> \param a(lda,matrixCols) Distributed matrix which should be inverted
!> Distribution is like in Scalapack.
!> Only upper triangle needs to be set.
!> The lower triangle is not referenced.
!> \result success              logical, reports success or failure
function elpa_invert_trm_complex_single_impl(obj, a) result(success)
#include "elpa_invert_trm.X90"
end function elpa_invert_trm_complex_single_impl
@@ -433,21 +436,20 @@ module elpa1_auxiliary_impl
!> \brief elpa_solve_tridi_double_impl: Solve tridiagonal eigensystem for a double-precision matrix with divide and conquer method
!> \details
!>
!> \param na Matrix dimension
!> \param nev number of eigenvalues/vectors to be computed
!> \param d array d(na) on input diagonal elements of tridiagonal matrix, on
!> output the eigenvalues in ascending order
!> \param e array e(na) on input subdiagonal elements of matrix, on exit destroyed
!> \param q on exit : matrix q(ldq,matrixCols) contains the eigenvectors
!> \param ldq leading dimension of matrix q
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param matrixCols columns of matrix q
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param wantDebug logical, give more debug information if .true.
!> \result success logical, .true. on success, else .false.
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%nev number of eigenvalues/vectors to be computed
!> \param - obj%local_nrows Leading dimension of q
!> \param - obj%local_ncols local columns of matrix q
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%wantDebug logical, more debug information on failure
!> \param d array d(na) on input diagonal elements of tridiagonal matrix, on
!> output the eigenvalues in ascending order
!> \param e array e(na) on input subdiagonal elements of matrix, on exit destroyed
!> \param q on exit : matrix q(ldq,matrixCols) contains the eigenvectors
!> \result success              logical, reports success or failure
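!>
!> \par Usage sketch
!> A hedged example of a call, inferred from the parameter list above; "obj" is assumed
!> to be an already configured elpa_t object, na is the matrix order and
!> na_rows/na_cols the caller's local dimensions of q.
!> \code{.f90}
!>   integer, parameter         :: dp = kind(1.0d0)
!>   integer                    :: na, na_rows, na_cols    ! caller's sizes (placeholders)
!>   real(kind=dp), allocatable :: d(:), e(:), q(:,:)
!>   logical                    :: success
!>
!>   allocate(d(na), e(na))          ! diagonal and subdiagonal of the tridiagonal matrix
!>   allocate(q(na_rows, na_cols))   ! receives the distributed eigenvectors
!>   ! ... fill d and e ...
!>   success = elpa_solve_tridi_double_impl(obj, d, e, q)
!>   ! on exit d holds the eigenvalues in ascending order, e is destroyed
!> \endcode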
function elpa_solve_tridi_double_impl(obj, d, e, q) result(success)
#include "elpa_solve_tridi_impl_public.X90"
@@ -463,21 +465,20 @@ module elpa1_auxiliary_impl
!> \brief elpa_solve_tridi_single_impl: Solve tridiagonal eigensystem for a single-precision matrix with divide and conquer method
!> \details
!>
!> \param na Matrix dimension
!> \param nev number of eigenvalues/vectors to be computed
!> \param d array d(na) on input diagonal elements of tridiagonal matrix, on
!> output the eigenvalues in ascending order
!> \param e array e(na) on input subdiagonal elements of matrix, on exit destroyed
!> \param q on exit : matrix q(ldq,matrixCols) contains the eigenvectors
!> \param ldq leading dimension of matrix q
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param matrixCols columns of matrix q
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param wantDebug logical, give more debug information if .true.
!> \result success logical, .true. on success, else .false.
!> \param obj elpa_t object contains:
!> \param - obj%na Order of matrix
!> \param - obj%nev number of eigenvalues/vectors to be computed
!> \param - obj%local_nrows Leading dimension of q
!> \param - obj%local_ncols local columns of matrix q
!> \param - obj%nblk blocksize of cyclic distribution, must be the same in both directions!
!> \param - obj%mpi_comm_rows MPI communicator for rows
!> \param - obj%mpi_comm_cols MPI communicator for columns
!> \param - obj%wantDebug logical, more debug information on failure
!> \param d array d(na) on input diagonal elements of tridiagonal matrix, on
!> output the eigenvalues in ascending order
!> \param e array e(na) on input subdiagonal elements of matrix, on exit destroyed
!> \param q on exit : matrix q(ldq,matrixCols) contains the eigenvectors
!> \result success              logical, reports success or failure
function elpa_solve_tridi_single_impl(obj, d, e, q) result(success)
#include "elpa_solve_tridi_impl_public.X90"