Commit 7c8c3cbd authored by Andreas Marek's avatar Andreas Marek

Merge branch 'master' into ELPA_GPU

parents 148ab471 3c8498f7
.TH "elpa_invert_trm_complex" 3 "Wed Sept 28 2016" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
elpa_invert_trm_complex \- Invert an upper triangular matrix
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.br
.RI "success = \fBelpa_invert_trm_complex\fP (na, a, lda, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, wantDebug)"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: Order of matrix \fBa\fP"
.br
.RI "complex*16, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP. Only the upper triangle needs to be set. The lower triangle is not referenced"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of cyclic distribution, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "logical, intent(in) \fBwantDebug\fP: if .true. , print more debug information in case of an error"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
#include "elpa.h"
.br
.RI "\fBint\fP success = \fBelpa_invert_trm_complex\fP (\fBint\fP na, \fB double complex *\fPa, \fBint\fP lda, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP wantDebug);"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "int \fBna\fP: Order of matrix \fBa\fP"
.br
.RI "double complex *\fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP. Only the upper triangle needs to be set. The lower triangle is not referenced"
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "int \fBnblk\fP: blocksize of cyclic distribution, must be the same in both directions"
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBwantDebug\fP: if 1, print more debug information in case of an error"
.br
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)"
.SH DESCRIPTION
Inverts an upper triangular matrix \fBa\fP. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_invert_trm_real\fP(3)
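Independent of the distributed, block-cyclic ELPA implementation (which these pages do not describe), the operation itself can be sketched serially in plain C for the real case. This is a minimal reference, assuming column-major storage as in the Fortran convention; \fBinvert_upper_triangular\fP is a hypothetical helper name, not part of the ELPA API.

```c
/* Serial reference for what elpa_invert_trm_* computes: the inverse of
 * an upper triangular matrix. Column-major storage, leading dimension
 * lda (ldinv for the result). Only the upper triangle of a is read;
 * the lower triangle of ainv is set to zero. Returns 0 on success,
 * 1 on a zero pivot (singular matrix). Hypothetical helper, not the
 * ELPA algorithm. */
int invert_upper_triangular(int n, const double *a, int lda,
                            double *ainv, int ldinv)
{
    for (int j = 0; j < n; j++) {
        for (int i = 0; i < n; i++)
            ainv[i + j * ldinv] = 0.0;
        if (a[j + j * lda] == 0.0)
            return 1;                       /* singular */
        ainv[j + j * ldinv] = 1.0 / a[j + j * lda];
        /* back-substitution, filling column j bottom-up */
        for (int i = j - 1; i >= 0; i--) {
            double s = 0.0;
            for (int k = i + 1; k <= j; k++)
                s += a[i + k * lda] * ainv[k + j * ldinv];
            ainv[i + j * ldinv] = -s / a[i + i * lda];
        }
    }
    return 0;
}
```

For example, for the 2 x 2 matrix with columns (2, 0) and (4, 8), the routine produces the inverse with columns (0.5, 0) and (\-0.25, 0.125).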
.TH "elpa_invert_trm_real" 3 "Wed Sept 28 2016" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
elpa_invert_trm_real \- Invert an upper triangular matrix
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.br
.RI "success = \fBelpa_invert_trm_real\fP (na, a, lda, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, wantDebug)"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: Order of matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP. Only the upper triangle needs to be set. The lower triangle is not referenced"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of cyclic distribution, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "logical, intent(in) \fBwantDebug\fP: if .true. , print more debug information in case of an error"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
#include "elpa.h"
.br
.RI "\fBint\fP success = \fBelpa_invert_trm_real\fP (\fBint\fP na, \fB double *\fPa, \fBint\fP lda, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP wantDebug);"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "int \fBna\fP: Order of matrix \fBa\fP"
.br
.RI "double *\fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP. Only the upper triangle needs to be set. The lower triangle is not referenced"
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "int \fBnblk\fP: blocksize of cyclic distribution, must be the same in both directions"
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBwantDebug\fP: if 1, print more debug information in case of an error"
.br
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)"
.SH DESCRIPTION
Inverts an upper triangular matrix \fBa\fP. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_invert_trm_complex\fP(3)
.TH "elpa_mult_ah_b_complex" 3 "Wed Sept 28 2016" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
elpa_mult_ah_b_complex \- Performs C = herm_transpose(A) * B
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.br
.RI "success = \fBelpa_mult_ah_b_complex\fP (uplo_a, uplo_c, na, ncb, a, lda, ldaCols, b, ldb, ldbCols, nblk, mpi_comm_rows, mpi_comm_cols, c, ldc, ldcCols)"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "character, intent(in) \fBuplo_a\fP: 'U' if \fBa\fP is upper triangular, 'L' if \fBa\fP is lower triangular, anything else if \fBa\fP is a full matrix"
.br
.RI "character, intent(in) \fBuplo_c\fP: 'U' if only the upper triangular part of \fBc\fP is needed, 'L' if only the lower triangular part of \fBc\fP is needed, anything else if the full matrix \fBc\fP is needed"
.br
.RI "integer, intent(in) \fBna\fP: Number of rows/columns of \fBa\fP, number of rows of \fBb\fP and \fBc\fP"
.br
.RI "integer, intent(in) \fBncb\fP: Number of columns of \fBb\fP and \fBc\fP"
.br
.RI "complex*16, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBldaCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "integer, intent(in) \fBldaCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "complex*16, intent(inout) \fBb\fP: locally distributed part of the matrix \fBb\fP. The local dimensions are \fBldb\fP x \fBldbCols\fP"
.br
.RI "integer, intent(in) \fBldb\fP: leading dimension of locally distributed matrix \fBb\fP"
.br
.RI "integer, intent(in) \fBldbCols\fP: number of columns of locally distributed matrices \fBb\fP"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of cyclic distribution, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "complex*16, intent(inout) \fBc\fP: locally distributed part of the matrix \fBc\fP. The local dimensions are \fBldc\fP x \fBldcCols\fP"
.br
.RI "integer, intent(in) \fBldc\fP: leading dimension of locally distributed matrix \fBc\fP"
.br
.RI "integer, intent(in) \fBldcCols\fP: number of columns of locally distributed matrices \fBc\fP"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
#include "elpa.h"
.br
#include <complex.h>
.br
.RI "\fBint\fP success = \fBelpa_mult_ah_b_complex\fP (\fBchar\fP uplo_a, \fBchar\fP uplo_c, \fBint\fP na, \fBint\fP ncb, \fB double complex *\fPa, \fBint\fP lda, \fBint\fP ldaCols, \fB double complex *\fPb, \fBint\fP ldb, \fBint\fP ldbCols, \fBint\fP nblk, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fB double complex *\fPc, \fBint\fP ldc, \fBint\fP ldcCols);"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "char \fBuplo_a\fP: 'U' if \fBa\fP is upper triangular, 'L' if \fBa\fP is lower triangular, anything else if \fBa\fP is a full matrix"
.br
.RI "char \fBuplo_c\fP: 'U' if only the upper triangular part of \fBc\fP is needed, 'L' if only the lower triangular part of \fBc\fP is needed, anything else if the full matrix \fBc\fP is needed"
.br
.RI "int \fBna\fP: Number of rows/columns of \fBa\fP, number of rows of \fBb\fP and \fBc\fP"
.br
.RI "int \fBncb\fP: Number of columns of \fBb\fP and \fBc\fP"
.br
.RI "double complex *\fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBldaCols\fP"
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "int \fBldaCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "double complex *\fBb\fP: locally distributed part of the matrix \fBb\fP. The local dimensions are \fBldb\fP x \fBldbCols\fP"
.br
.RI "int \fBldb\fP: leading dimension of locally distributed matrix \fBb\fP"
.br
.RI "int \fBldbCols\fP: number of columns of locally distributed matrices \fBb\fP"
.br
.RI "int \fBnblk\fP: blocksize of cyclic distribution, must be the same in both directions"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "double complex *\fBc\fP: locally distributed part of the matrix \fBc\fP. The local dimensions are \fBldc\fP x \fBldcCols\fP"
.br
.RI "int \fBldc\fP: leading dimension of locally distributed matrix \fBc\fP"
.br
.RI "int \fBldcCols\fP: number of columns of locally distributed matrices \fBc\fP"
.br
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)"
.SH DESCRIPTION
Computes c = herm_transpose(a) * b. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_mult_at_b_real\fP(3)
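The product this routine forms can be illustrated serially, ignoring the distributed layout and the \fBuplo_a\fP/\fBuplo_c\fP optimizations. A minimal C99 sketch, assuming column-major storage; \fBmult_ah_b\fP is a hypothetical helper, not part of the ELPA API.

```c
#include <complex.h>

/* Serial reference for what elpa_mult_ah_b_complex computes:
 * c = conjugate-transpose(a) * b, column-major storage. a is na x na,
 * b and c are na x ncb; lda, ldb, ldc are the leading dimensions.
 * Unlike the ELPA routine, this always forms the full product.
 * Hypothetical helper, not the ELPA algorithm. */
void mult_ah_b(int na, int ncb,
               const double complex *a, int lda,
               const double complex *b, int ldb,
               double complex *c, int ldc)
{
    for (int j = 0; j < ncb; j++)
        for (int i = 0; i < na; i++) {
            double complex s = 0.0;
            /* row i of a^H is the conjugate of column i of a */
            for (int k = 0; k < na; k++)
                s += conj(a[k + i * lda]) * b[k + j * ldb];
            c[i + j * ldc] = s;
        }
}
```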
.TH "elpa_mult_at_b_real" 3 "Wed Sept 28 2016" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
elpa_mult_at_b_real \- Performs C = transpose(A) * B
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.br
.RI "success = \fBelpa_mult_at_b_real\fP (uplo_a, uplo_c, na, ncb, a, lda, ldaCols, b, ldb, ldbCols, nblk, mpi_comm_rows, mpi_comm_cols, c, ldc, ldcCols)"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "character, intent(in) \fBuplo_a\fP: 'U' if \fBa\fP is upper triangular, 'L' if \fBa\fP is lower triangular, anything else if \fBa\fP is a full matrix"
.br
.RI "character, intent(in) \fBuplo_c\fP: 'U' if only the upper triangular part of \fBc\fP is needed, 'L' if only the lower triangular part of \fBc\fP is needed, anything else if the full matrix \fBc\fP is needed"
.br
.RI "integer, intent(in) \fBna\fP: Number of rows/columns of \fBa\fP, number of rows of \fBb\fP and \fBc\fP"
.br
.RI "integer, intent(in) \fBncb\fP: Number of columns of \fBb\fP and \fBc\fP"
.br
.RI "real*8, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBldaCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "integer, intent(in) \fBldaCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "real*8, intent(inout) \fBb\fP: locally distributed part of the matrix \fBb\fP. The local dimensions are \fBldb\fP x \fBldbCols\fP"
.br
.RI "integer, intent(in) \fBldb\fP: leading dimension of locally distributed matrix \fBb\fP"
.br
.RI "integer, intent(in) \fBldbCols\fP: number of columns of locally distributed matrices \fBb\fP"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of cyclic distribution, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "real*8, intent(inout) \fBc\fP: locally distributed part of the matrix \fBc\fP. The local dimensions are \fBldc\fP x \fBldcCols\fP"
.br
.RI "integer, intent(in) \fBldc\fP: leading dimension of locally distributed matrix \fBc\fP"
.br
.RI "integer, intent(in) \fBldcCols\fP: number of columns of locally distributed matrices \fBc\fP"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
#include "elpa.h"
.br
.RI "\fBint\fP success = \fBelpa_mult_at_b_real\fP (\fBchar\fP uplo_a, \fBchar\fP uplo_c, \fBint\fP na, \fBint\fP ncb, \fB double *\fPa, \fBint\fP lda, \fBint\fP ldaCols, \fB double *\fPb, \fBint\fP ldb, \fBint\fP ldbCols, \fBint\fP nblk, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fB double *\fPc, \fBint\fP ldc, \fBint\fP ldcCols);"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "char \fBuplo_a\fP: 'U' if \fBa\fP is upper triangular, 'L' if \fBa\fP is lower triangular, anything else if \fBa\fP is a full matrix"
.br
.RI "char \fBuplo_c\fP: 'U' if only the upper triangular part of \fBc\fP is needed, 'L' if only the lower triangular part of \fBc\fP is needed, anything else if the full matrix \fBc\fP is needed"
.br
.RI "int \fBna\fP: Number of rows/columns of \fBa\fP, number of rows of \fBb\fP and \fBc\fP"
.br
.RI "int \fBncb\fP: Number of columns of \fBb\fP and \fBc\fP"
.br
.RI "double *\fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBldaCols\fP"
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "int \fBldaCols\fP: number of columns of locally distributed matrices \fBa\fP"
.br
.RI "double *\fBb\fP: locally distributed part of the matrix \fBb\fP. The local dimensions are \fBldb\fP x \fBldbCols\fP"
.br
.RI "int \fBldb\fP: leading dimension of locally distributed matrix \fBb\fP"
.br
.RI "int \fBldbCols\fP: number of columns of locally distributed matrices \fBb\fP"
.br
.RI "int \fBnblk\fP: blocksize of cyclic distribution, must be the same in both directions"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "double *\fBc\fP: locally distributed part of the matrix \fBc\fP. The local dimensions are \fBldc\fP x \fBldcCols\fP"
.br
.RI "int \fBldc\fP: leading dimension of locally distributed matrix \fBc\fP"
.br
.RI "int \fBldcCols\fP: number of columns of locally distributed matrices \fBc\fP"
.br
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)"
.SH DESCRIPTION
Computes c = transpose(a) * b. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_mult_ah_b_complex\fP(3)
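The real variant forms the same product without conjugation. A minimal serial C sketch, assuming column-major storage and ignoring the distributed layout and the \fBuplo_a\fP/\fBuplo_c\fP optimizations; \fBmult_at_b\fP is a hypothetical helper, not part of the ELPA API.

```c
/* Serial reference for what elpa_mult_at_b_real computes:
 * c = transpose(a) * b, column-major storage. a is na x na,
 * b and c are na x ncb; lda, ldb, ldc are the leading dimensions.
 * Hypothetical helper, not the ELPA algorithm. */
void mult_at_b(int na, int ncb,
               const double *a, int lda,
               const double *b, int ldb,
               double *c, int ldc)
{
    for (int j = 0; j < ncb; j++)
        for (int i = 0; i < na; i++) {
            double s = 0.0;
            /* row i of transpose(a) is column i of a */
            for (int k = 0; k < na; k++)
                s += a[k + i * lda] * b[k + j * ldb];
            c[i + j * ldc] = s;
        }
}
```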
.TH "elpa_solve_evp_complex" 3 "Mon Oct 10 2015" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
elpa_solve_evp_complex \- solve the complex eigenvalue problem with either the 1-stage or the 2-stage ELPA solver
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa
.br
.br
.RI "success = \fBelpa_solve_evp_complex\fP (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, mpi_comm_all, THIS_COMPLEX_ELPA_KERNEL=THIS_COMPLEX_ELPA_KERNEL, method=method)"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "integer, intent(in) \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalues are calculated"
.br
.RI "complex*16, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBev\fP: on output the first \fBnev\fP computed eigenvalues"
.br
.RI "complex*16, intent(inout) \fBq\fP: on output the first \fBnev\fP computed eigenvectors"
.br
.RI "integer, intent(in) \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of block cyclic distribution, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.RI "integer, intent(in), optional \fBTHIS_COMPLEX_ELPA_KERNEL\fP: choose the compute kernel for the 2-stage solver"
.br
.RI "character(*), optional \fBmethod\fP: use the 1stage solver if '1stage', the 2stage solver if '2stage', and (at the moment) the 2stage solver if 'auto'"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
#include "elpa.h"
.br
#include <complex.h>
.br
.RI "\fBint\fP success = \fBelpa_solve_evp_complex\fP (\fBint\fP na, \fBint\fP nev, \fB double complex *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble complex *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP mpi_comm_all, \fBint\fP THIS_ELPA_COMPLEX_KERNEL, \fBchar *\fPmethod);"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalues are calculated"
.br
.RI "double complex *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
.br
.RI "double complex *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
.br
.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "int \fBnblk\fP: blocksize of block cyclic distribution, must be the same in both directions"
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.RI "int \fBTHIS_ELPA_COMPLEX_KERNEL\fP: choose the compute kernel for the 2-stage solver"
.br
.RI "char *\fBmethod\fP: use the 1stage solver if '1stage', the 2stage solver if '2stage', and (at the moment) the 2stage solver if 'auto'"
.br
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)"
.SH DESCRIPTION
Solve the complex eigenvalue problem. The value of \fBmethod\fP decides whether the 1stage or 2stage solver is used. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real\fP(3) \fBelpa2_print_kernels\fP(1)
.TH "elpa_solve_evp_complex_1stage" 3 "Tue Oct 18 2016" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
elpa_solve_evp_complex_1stage \- solve the complex eigenvalue problem with the 1-stage ELPA solver
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa1
.br
.br
.RI "success = \fBelpa_solve_evp_complex_1stage\fP (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols)"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "integer, intent(in) \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalues are calculated"
.br
.RI "complex*16, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBev\fP: on output the first \fBnev\fP computed eigenvalues"
.br
.RI "complex*16, intent(inout) \fBq\fP: on output the first \fBnev\fP computed eigenvectors"
.br
.RI "integer, intent(in) \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of block cyclic distribution, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
#include "elpa.h"
.br
#include <complex.h>
.br
.RI "\fBint\fP success = \fBelpa_solve_evp_complex_1stage\fP (\fBint\fP na, \fBint\fP nev, \fB double complex *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble complex *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols);"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalues are calculated"
.br
.RI "double complex *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
.br
.RI "double complex *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
.br
.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "int \fBnblk\fP: blocksize of block cyclic distribution, must be the same in both directions"
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)"
.SH DESCRIPTION
Solve the complex eigenvalue problem with the 1-stage solver. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
The interface \fBelpa_solve_evp_complex\fP(3) is a more flexible alternative.
.br
.SH "SEE ALSO"
\fBelpa_get_communicators\fP(3) \fBelpa_solve_evp_real\fP(3) \fBelpa_solve_evp_complex\fP(3) \fBelpa_solve_evp_real_1stage\fP(3) \fBelpa_solve_evp_real_2stage\fP(3) \fBelpa_solve_evp_complex_2stage\fP(3) \fBelpa2_print_kernels\fP(1)
.TH "elpa_solve_evp_real" 3 "Mon Oct 10 2016" "ELPA" \" -*- nroff -*-
.ad l
.nh
.SH NAME
elpa_solve_evp_real \- solve the real eigenvalue problem
.br
.SH SYNOPSIS
.br
.SS FORTRAN INTERFACE
use elpa
.br
.br
.RI "success = \fBelpa_solve_evp_real\fP (na, nev, a(lda,matrixCols), ev(nev), q(ldq, matrixCols), ldq, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, mpi_comm_all, THIS_REAL_ELPA_KERNEL=THIS_REAL_ELPA_KERNEL, useQr=useQR, method=method)"
.br
.RI " "
.br
.RI "Generalized interface to the ELPA 1stage and 2stage solver for real-valued problems"
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "integer, intent(in) \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "integer, intent(in) \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalues are calculated"
.br
.RI "real*8, intent(inout) \fBa\fP: locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "integer, intent(in) \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "real*8, intent(inout) \fBev\fP: on output the first \fBnev\fP computed eigenvalues"
.br
.RI "real*8, intent(inout) \fBq\fP: on output the first \fBnev\fP computed eigenvectors"
.br
.RI "integer, intent(in) \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "integer, intent(in) \fBnblk\fP: blocksize of block cyclic distribution, must be the same in both directions"
.br
.RI "integer, intent(in) \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "integer, intent(in) \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "integer, intent(in) \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.RI "integer, intent(in), optional \fBTHIS_REAL_ELPA_KERNEL\fP: optional argument; choose the compute kernel for the 2-stage solver"
.br
.RI "logical, intent(in), optional: \fBuseQR\fP: optional argument; switches to QR-decomposition if set to .true."
.br
.RI "character(*), optional \fBmethod\fP: use the 1stage solver if '1stage', the 2stage solver if '2stage', and (at the moment) the 2stage solver if 'auto'"
.br
.RI "logical \fBsuccess\fP: return value indicating success or failure"
.br
.SS C INTERFACE
#include "elpa.h"
.br
.RI "\fBint\fP success = \fBelpa_solve_evp_real\fP (\fBint\fP na, \fBint\fP nev, \fB double *\fPa, \fBint\fP lda, \fB double *\fPev, \fBdouble *\fPq, \fBint\fP ldq, \fBint\fP nblk, \fBint\fP matrixCols, \fBint\fP mpi_comm_rows, \fBint\fP mpi_comm_cols, \fBint\fP mpi_comm_all, \fBint\fP THIS_ELPA_REAL_KERNEL, \fBint\fP useQr, \fBchar *\fPmethod);"
.br
.RI " "
.br
.RI "With the definitions of the input and output variables:"
.br
.RI "int \fBna\fP: global dimension of quadratic matrix \fBa\fP to solve"
.br
.RI "int \fBnev\fP: number of eigenvalues to be computed; the first \fBnev\fP eigenvalues are calculated"
.br
.RI "double *\fBa\fP: pointer to locally distributed part of the matrix \fBa\fP. The local dimensions are \fBlda\fP x \fBmatrixCols\fP"
.br
.RI "int \fBlda\fP: leading dimension of locally distributed matrix \fBa\fP"
.br
.RI "double *\fBev\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvalues"
.br
.RI "double *\fBq\fP: pointer to memory containing on output the first \fBnev\fP computed eigenvectors"
.br
.RI "int \fBldq\fP: leading dimension of matrix \fBq\fP which stores the eigenvectors"
.br
.RI "int \fBnblk\fP: blocksize of block cyclic distribution, must be the same in both directions"
.br
.RI "int \fBmatrixCols\fP: number of columns of locally distributed matrices \fBa\fP and \fBq\fP"
.br
.RI "int \fBmpi_comm_rows\fP: communicator for communication in rows. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_cols\fP: communicator for communication in columns. Constructed with \fBelpa_get_communicators\fP(3)"
.br
.RI "int \fBmpi_comm_all\fP: communicator for all processes in the processor set involved in ELPA"
.br
.RI "int \fBTHIS_ELPA_REAL_KERNEL\fP: choose the compute kernel for the 2-stage solver"
.br
.RI "int \fBuseQR\fP: if set to 1, switch to QR-decomposition"
.br
.RI "char *\fBmethod\fP: use the 1stage solver if '1stage', the 2stage solver if '2stage', and (at the moment) the 2stage solver if 'auto'"
.br
.RI "int \fBsuccess\fP: return value indicating success (1) or failure (0)"
.SH DESCRIPTION
Solve the real eigenvalue problem. The value of \fBmethod\fP decides whether the 1stage or 2stage solver is used. The ELPA communicators \fBmpi_comm_rows\fP and \fBmpi_comm_cols\fP are obtained with the \fBelpa_get_communicators\fP(3) function. The distributed quadratic matrix \fBa\fP has global dimensions \fBna\fP x \fBna\fP, and a local size \fBlda\fP x \fBmatrixCols\fP. The solver will compute the first \fBnev\fP eigenvalues, which will be stored on exit in \fBev\fP. The eigenvectors corresponding to the eigenvalues will be stored in \fBq\fP. All memory of the arguments must be allocated outside the call to the solver.
.br
.SH "SEE ALSO"