
Prepare release candidate rc1 of ELPA-2019.05.001

- remove deprecated features
- update ABI and API versioning
parent ea9080bc
......@@ -11,7 +11,16 @@ Changelog for ELPA 2019.05.001.rc1
- allow measurements with the likwid tool
- users can define the default kernel at build time
- the ELPA version number is provided in the C header files
- as announced a year ago, the following deprecated routines have finally been
removed; see DEPRECATED_FEATURES for the replacement routines, which were
introduced a year ago (a short migration sketch follows below). Removed routines:
-> mult_at_b_real
-> mult_ah_b_complex
-> invert_trm_real
-> invert_trm_complex
-> cholesky_real
-> cholesky_complex
-> solve_tridi
Changelog for ELPA 2018.11.001
......
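For code that still calls one of the removed names, the replacement listed in
DEPRECATED_FEATURES is the corresponding elpa_-prefixed, precision-suffixed
routine; as the deleted declarations in elpa1.F90 further down show, the old
names were only deprecated interfaces to these routines, so migrating is
essentially a rename. A minimal sketch for cholesky_real; the argument names
and order here are illustrative only and should be checked against the
installed ELPA modules:

  logical :: ok
  ! before (removed in 2019.05.001):
  !   ok = cholesky_real(na, a, lda, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, wantDebug)
  ! after (replacement name taken from DEPRECATED_FEATURES, same arguments):
  ok = elpa_cholesky_real_double(na, a, lda, nblk, matrixCols, mpi_comm_rows, mpi_comm_cols, wantDebug)

For single precision, the corresponding "_single" routine is used instead, as noted in DEPRECATED_FEATURES.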
......@@ -26,16 +26,17 @@ have been replaced by new names. The old interfaces will be removed
| get_elpa_communicators | elpa_get_communicators | (removed since 2017.11.001) |
| solve_evp_real | elpa_solve_evp_real_1stage_double | (removed since 2017.11.001) |
| solve_evp_complex | elpa_solve_evp_complex_1stage_double | (removed since 2017.11.001) |
| solve_evp_real_1stage | elpa_solve_evp_real_1stage_double | will be removed 2018.11.001 |
| solve_evp_complex_1stage | elpa_solve_evp_complex_1stage_double | will be removed 2018.11.001 |
| solve_evp_real_2stage | elpa_solve_evp_real_2stage_double | will be removed 2018.11.001 |
| solve_evp_complex_2stage | elpa_solve_evp_complex_2stage_double | will be removed 2018.11.001 |
| mult_at_b_real | elpa_mult_at_b_real_double | will be removed 2018.11.001 |
| mult_ah_b_complex | elpa_mult_ah_b_complex_double | will be removed 2018.11.001 |
| invert_trm_real | elpa_invert_trm_real_double | will be removed 2018.11.001 |
| invert_trm_complex | elpa_invert_trm_complex_double | will be removed 2018.11.001 |
| cholesky_real | elpa_cholesky_real_double | will be removed 2018.11.001 |
| cholesky_complex | elpa_cholesky_complex_double | will be removed 2018.11.001 |
| solve_evp_real_1stage | elpa_solve_evp_real_1stage_double | (removed since 2019.05.001) |
| solve_evp_complex_1stage | elpa_solve_evp_complex_1stage_double | (removed since 2019.05.001) |
| solve_evp_real_2stage | elpa_solve_evp_real_2stage_double | (removed since 2019.05.001) |
| solve_evp_complex_2stage | elpa_solve_evp_complex_2stage_double | (removed since 2019.05.001) |
| mult_at_b_real | elpa_mult_at_b_real_double | (removed since 2019.05.001) |
| mult_ah_b_complex | elpa_mult_ah_b_complex_double | (removed since 2019.05.001) |
| invert_trm_real | elpa_invert_trm_real_double | (removed since 2019.05.001) |
| invert_trm_complex | elpa_invert_trm_complex_double | (removed since 2019.05.001) |
| cholesky_real | elpa_cholesky_real_double | (removed since 2019.05.001) |
| cholesky_complex | elpa_cholesky_complex_double | (removed since 2019.05.001) |
| solve_tridi | elpa_solve_tridi_double | (removed since 2019.05.001) |
For all symbols, the corresponding "_single" routines are also available.
......
......@@ -8,6 +8,7 @@ For detailed information about changes since release ELPA 2018.11 please have a
- the ELPA version number is exported to the C header files
- C functions can have an optional error argument, if the compiler supports this
=> ABI and API change
- as announced, the deprecated routines have been removed
ABI change
---------------------
......@@ -17,11 +18,11 @@ Since release 2018.10.001 the ABI has changed.
Any incompatibilities with the previous version?
---------------------------------------
For Fortran:
Break of ABI compatibility, since all functions obtained an optional integer
argument for the error code.
Break of ABI compatibility, since all routines announced as deprecated have been removed.
For C:
Break of ABI and API compatibility, since all functions obtained a required int* argument for the error code.
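To show what the error-code argument looks like from the Fortran side of the
object-based interface, here is an abbreviated sketch; the MPI/BLACS setup and
most e%set calls are omitted, and the exact set of keys as well as the trailing
optional error argument on elpa_deallocate/elpa_uninit are assumptions to be
checked against the elpa module shipped with this release:

  use elpa
  implicit none
  class(elpa_t), pointer :: e
  integer :: err, na, nev
  real(kind=8), allocatable :: a(:,:), q(:,:), ev(:)
  ! na, nev and the block-cyclically distributed arrays a, q, ev are assumed to
  ! be set up as usual (BLACS grid, local sizes); that part is omitted here

  if (elpa_init(20171201) /= ELPA_OK) stop "unsupported ELPA API version"
  e => elpa_allocate(err)               ! err receives the error code
  call e%set("na", na, err)
  call e%set("nev", nev, err)
  ! ... further e%set calls (local_nrows, local_ncols, nblk, communicators) ...
  err = e%setup()
  call e%eigenvectors(a, ev, q, err)    ! trailing optional integer error code
  call elpa_deallocate(e, err)
  call elpa_uninit(err)

In C the same argument is a required int*, which is why both the C ABI and the C API change.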
......@@ -451,21 +451,40 @@ for comp, s, a in product(
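# Each print below emits one script line of the generated GitLab CI configuration:
# run_project_tests.sh configures, builds and installs ELPA into $PWD/installdest,
# then configures the test project against that installation via PKG_CONFIG_PATH;
# the legacy_api variants additionally pass --enable-legacy-interface to ELPA's configure.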
projectBinary="test_real2"
if (comp == "intel"):
print(" - ./ci_test_scripts/run_project_tests.sh -c \" FC=mpiifort FCFLAGS=\\\"-march=native \\\" CFLAGS=\\\"-march=native\\\" \
SCALAPACK_LDFLAGS=\\\"$MKL_INTEL_SCALAPACK_LDFLAGS_MPI_NO_OMP\\\" \
SCALAPACK_FCFLAGS=\\\"$MKL_INTEL_SCALAPACK_FCFLAGS_MPI_NO_OMP\\\" \
--enable-option-checking=fatal --disable-avx2 --prefix=$PWD/installdest --disable-avx2 || { cat config.log; exit 1; } \" \
-t 2 -m 150 -n 50 -b 16 -S $SLURM -p test_project_"+stage[s]+api[a]+" -e "+projectBinary+" \
-C \" FC=mpiifort PKG_CONFIG_PATH=../../installdest/lib/pkgconfig \
--enable-option-checking=fatal || { cat config.log; exit 1; } \" ")
if (a == "new_api"):
print(" - ./ci_test_scripts/run_project_tests.sh -c \" FC=mpiifort FCFLAGS=\\\"-march=native \\\" CFLAGS=\\\"-march=native\\\" \
SCALAPACK_LDFLAGS=\\\"$MKL_INTEL_SCALAPACK_LDFLAGS_MPI_NO_OMP\\\" \
SCALAPACK_FCFLAGS=\\\"$MKL_INTEL_SCALAPACK_FCFLAGS_MPI_NO_OMP\\\" \
--enable-option-checking=fatal --disable-avx2 --prefix=$PWD/installdest --disable-avx2 || { cat config.log; exit 1; } \" \
-t 2 -m 150 -n 50 -b 16 -S $SLURM -p test_project_"+stage[s]+api[a]+" -e "+projectBinary+" \
-C \" FC=mpiifort PKG_CONFIG_PATH=../../installdest/lib/pkgconfig \
--enable-option-checking=fatal || { cat config.log; exit 1; } \" ")
if (a == "legacy_api"):
print(" - ./ci_test_scripts/run_project_tests.sh -c \" FC=mpiifort FCFLAGS=\\\"-march=native \\\" CFLAGS=\\\"-march=native\\\" \
SCALAPACK_LDFLAGS=\\\"$MKL_INTEL_SCALAPACK_LDFLAGS_MPI_NO_OMP\\\" \
SCALAPACK_FCFLAGS=\\\"$MKL_INTEL_SCALAPACK_FCFLAGS_MPI_NO_OMP\\\" \
--enable-option-checking=fatal --enable-legacy-interface --disable-avx2 --prefix=$PWD/installdest --disable-avx2 || { cat config.log; exit 1; } \" \
-t 2 -m 150 -n 50 -b 16 -S $SLURM -p test_project_"+stage[s]+api[a]+" -e "+projectBinary+" \
-C \" FC=mpiifort PKG_CONFIG_PATH=../../installdest/lib/pkgconfig \
--enable-option-checking=fatal || { cat config.log; exit 1; } \" ")
if (comp == "gnu"):
print(" - ./ci_test_scripts/run_project_tests.sh -c \" FC=mpif90 FCFLAGS=\\\"-march=native \\\" CFLAGS=\\\"-march=native\\\" \
SCALAPACK_LDFLAGS=\\\"$MKL_GFORTRAN_SCALAPACK_LDFLAGS_MPI_NO_OMP\\\" \
SCALAPACK_FCFLAGS=\\\"$MKL_GFORTRAN_SCALAPACK_FCFLAGS_MPI_NO_OMP\\\" \
--enable-option-checking=fatal --disable-avx2 --prefix=$PWD/installdest --disable-avx2 || { cat config.log; exit 1; } \" \
-t 2 -m 150 -n 50 -b 16 -S $SLURM -p test_project_"+stage[s]+api[a]+" -e "+projectBinary+" \
-C \" FC=mpif90 PKG_CONFIG_PATH=../../installdest/lib/pkgconfig \
--enable-option-checking=fatal || { cat config.log; exit 1; } \" ")
if (a == "new_api"):
print(" - ./ci_test_scripts/run_project_tests.sh -c \" FC=mpif90 FCFLAGS=\\\"-march=native \\\" CFLAGS=\\\"-march=native\\\" \
SCALAPACK_LDFLAGS=\\\"$MKL_GFORTRAN_SCALAPACK_LDFLAGS_MPI_NO_OMP\\\" \
SCALAPACK_FCFLAGS=\\\"$MKL_GFORTRAN_SCALAPACK_FCFLAGS_MPI_NO_OMP\\\" \
--enable-option-checking=fatal --disable-avx2 --prefix=$PWD/installdest --disable-avx2 || { cat config.log; exit 1; } \" \
-t 2 -m 150 -n 50 -b 16 -S $SLURM -p test_project_"+stage[s]+api[a]+" -e "+projectBinary+" \
-C \" FC=mpif90 PKG_CONFIG_PATH=../../installdest/lib/pkgconfig \
--enable-option-checking=fatal || { cat config.log; exit 1; } \" ")
if (a == "legacy_api"):
print(" - ./ci_test_scripts/run_project_tests.sh -c \" FC=mpif90 FCFLAGS=\\\"-march=native \\\" CFLAGS=\\\"-march=native\\\" \
SCALAPACK_LDFLAGS=\\\"$MKL_GFORTRAN_SCALAPACK_LDFLAGS_MPI_NO_OMP\\\" \
SCALAPACK_FCFLAGS=\\\"$MKL_GFORTRAN_SCALAPACK_FCFLAGS_MPI_NO_OMP\\\" \
--enable-option-checking=fatal --enable-legacy-interface --disable-avx2 --prefix=$PWD/installdest --disable-avx2 || { cat config.log; exit 1; } \" \
-t 2 -m 150 -n 50 -b 16 -S $SLURM -p test_project_"+stage[s]+api[a]+" -e "+projectBinary+" \
-C \" FC=mpif90 PKG_CONFIG_PATH=../../installdest/lib/pkgconfig \
--enable-option-checking=fatal || { cat config.log; exit 1; } \" ")
print("\n\n")
print("#The tests follow here")
......@@ -529,6 +548,7 @@ address_sanitize_flag = {
"no-address-sanitize" : "no-address-sanitize",
}
#matrix_size = {
# "small" : "150",
# "medium" : "1500",
......
......@@ -27,7 +27,7 @@ AM_SILENT_RULES([yes])
# by the current interface, as they are ABI compatible (e.g. only new symbols
# were added by the new interface)
#
AC_SUBST([ELPA_SO_VERSION], [13:0:0])
AC_SUBST([ELPA_SO_VERSION], [14:0:0])
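# Bumped from 13:0:0: removing the deprecated legacy routines is an incompatible
# interface change, so the libtool "current" field is incremented and "revision"
# is reset to 0.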
# AC_DEFINE_SUBST(NAME, VALUE, DESCRIPTION)
# -----------------------------------------
......
......@@ -96,7 +96,6 @@ module elpa1
public :: elpa_solve_evp_real_1stage_double !< Driver routine for real double-precision 1-stage eigenvalue problem
public :: solve_evp_real_1stage !< Driver routine for real double-precision eigenvalue problem
public :: solve_evp_real_1stage_double !< Driver routine for real double-precision eigenvalue problem
#ifdef WANT_SINGLE_PRECISION_REAL
public :: solve_evp_real_1stage_single !< Driver routine for real single-precision eigenvalue problem
......@@ -104,7 +103,6 @@ module elpa1
#endif
public :: elpa_solve_evp_complex_1stage_double !< Driver routine for complex 1-stage eigenvalue problem
public :: solve_evp_complex_1stage !< Driver routine for complex double-precision eigenvalue problem
public :: solve_evp_complex_1stage_double !< Driver routine for complex double-precision eigenvalue problem
#ifdef WANT_SINGLE_PRECISION_COMPLEX
public :: solve_evp_complex_1stage_single !< Driver routine for complex single-precision eigenvalue problem
......@@ -114,22 +112,16 @@ module elpa1
! imported from elpa1_auxilliary
public :: elpa_mult_at_b_real_double !< Multiply double-precision real matrices A**T * B
public :: mult_at_b_real !< old, deprecated interface to multiply double-precision real matrices A**T * B DO NOT USE
public :: elpa_mult_ah_b_complex_double !< Multiply double-precision complex matrices A**H * B
public :: mult_ah_b_complex              !< old, deprecated interface to multiply double-precision complex matrices A**H * B DO NOT USE
public :: elpa_invert_trm_real_double !< Invert double-precision real triangular matrix
public :: invert_trm_real !< old, deprecated interface to invert double-precision real triangular matrix DO NOT USE
public :: elpa_invert_trm_complex_double !< Invert double-precision complex triangular matrix
public :: invert_trm_complex !< old, deprecated interface to invert double-precision complex triangular matrix DO NOT USE
public :: elpa_cholesky_real_double !< Cholesky factorization of a double-precision real matrix
public :: cholesky_real !< old, deprecated interface to do Cholesky factorization of a double-precision real matrix DO NOT USE
public :: elpa_cholesky_complex_double !< Cholesky factorization of a double-precision complex matrix
public :: cholesky_complex !< old, deprecated interface to do Cholesky factorization of a double-precision complex matrix DO NOT USE
public :: elpa_solve_tridi_double !< Solve a double-precision tridiagonal eigensystem with divide and conquer method
......@@ -155,10 +147,6 @@ module elpa1
logical, public :: elpa_print_times = .false. !< Set elpa_print_times to .true. for explicit timing outputs
interface solve_evp_real_1stage
module procedure elpa_solve_evp_real_1stage_double
end interface
!> \brief elpa_solve_evp_real_1stage_double: Fortran function to solve the real eigenvalue problem with 1-stage solver. This is called by "elpa_solve_evp_real"
!>
! Parameters
......@@ -200,46 +188,6 @@ module elpa1
module procedure elpa_solve_evp_real_1stage_double
end interface
!> \brief solve_evp_complex_1stage: old, deprecated Fortran function to solve the complex eigenvalue problem with 1-stage solver. will be deleted at some point. Better use "elpa_solve_evp_complex_1stage_double" or "elpa_solve_evp_complex"
!>
!> \details
!> The interface and variable definition is the same as in "elpa_solve_evp_complex_1stage_double"
! Parameters
!
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed.
!> The smallest nev eigenvalues/eigenvectors are calculated.
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!> Must be always dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols distributed number of matrix columns
!>
!> \param mpi_comm_rows MPI-Communicator for rows
!> \param mpi_comm_cols MPI-Communicator for columns
!>
!> \result success
interface solve_evp_complex_1stage
module procedure elpa_solve_evp_complex_1stage_double
end interface
!> \brief solve_evp_complex_1stage_double: Fortran function to solve the complex eigenvalue problem with 1-stage solver. This is called by "elpa_solve_evp_complex"
!>
! Parameters
......
......@@ -71,52 +71,6 @@ module elpa2
public :: elpa_solve_evp_real_2stage_double !< Driver routine for real double-precision 2-stage eigenvalue problem
public :: elpa_solve_evp_complex_2stage_double !< Driver routine for complex double-precision 2-stage eigenvalue problem
!-------------------------------------------------------------------------------
!> \brief solve_evp_real_2stage: Old, deprecated interface for elpa_solve_evp_real_2stage_double
!>
!> Parameters
!>
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!> Must be always dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols local columns of matrix a and q
!>
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param mpi_comm_all MPI communicator for the total processor set
!>
!> \param THIS_REAL_ELPA_KERNEL_API (optional) specify used ELPA2 kernel via API
!>
!> \param useQR (optional) use QR decomposition
!> \param useGPU (optional) decide whether to use GPUs or not
!>
!> \result success      logical, false if error occurred
!-------------------------------------------------------------------------------
interface solve_evp_real_2stage
module procedure solve_evp_real_2stage_double
end interface
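Since the deprecated name and elpa_solve_evp_real_2stage_double are both
interfaces to the same module procedure solve_evp_real_2stage_double, a caller
only has to change the routine name. A sketch with the required arguments as
documented above (the optional kernel/QR/GPU arguments can still be appended;
their exact names should be checked against the installed module):

  logical :: success
  ! before (removed in 2019.05.001):
  !   success = solve_evp_real_2stage(na, nev, a, lda, ev, q, ldq, nblk, &
  !                                   matrixCols, mpi_comm_rows, mpi_comm_cols, mpi_comm_all)
  ! after: identical argument list, new name
  success = elpa_solve_evp_real_2stage_double(na, nev, a, lda, ev, q, ldq, nblk, &
                                              matrixCols, mpi_comm_rows, mpi_comm_cols, mpi_comm_all)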
!-------------------------------------------------------------------------------
!> \brief elpa_solve_evp_real_2stage_double: Fortran function to solve the real double-precision eigenvalue problem with a 2 stage approach. This is called by "elpa_solve_evp_real_double"
!>
......@@ -161,49 +115,6 @@ module elpa2
module procedure solve_evp_real_2stage_double
end interface
!-------------------------------------------------------------------------------
!> \brief solve_evp_complex_2stage: Old, deprecated interface for elpa_solve_evp_complex_2stage_double
!>
!> Parameters
!>
!> \param na Order of matrix a
!>
!> \param nev Number of eigenvalues needed
!>
!> \param a(lda,matrixCols) Distributed matrix for which eigenvalues are to be computed.
!> Distribution is like in Scalapack.
!> The full matrix must be set (not only one half like in scalapack).
!> Destroyed on exit (upper and lower half).
!>
!> \param lda Leading dimension of a
!>
!> \param ev(na) On output: eigenvalues of a, every processor gets the complete set
!>
!> \param q(ldq,matrixCols) On output: Eigenvectors of a
!> Distribution is like in Scalapack.
!> Must be always dimensioned to the full size (corresponding to (na,na))
!> even if only a part of the eigenvalues is needed.
!>
!> \param ldq Leading dimension of q
!>
!> \param nblk blocksize of cyclic distribution, must be the same in both directions!
!>
!> \param matrixCols local columns of matrix a and q
!>
!> \param mpi_comm_rows MPI communicator for rows
!> \param mpi_comm_cols MPI communicator for columns
!> \param mpi_comm_all MPI communicator for the total processor set
!>
!> \param THIS_COMPLEX_ELPA_KERNEL_API (optional) specify used ELPA2 kernel via API
!>
!> \param useGPU (optional) decide whether to use GPUs or not
!>
!> \result success      logical, false if error occurred
!-------------------------------------------------------------------------------
interface solve_evp_complex_2stage
module procedure solve_evp_complex_2stage_double
end interface
!-------------------------------------------------------------------------------
!> \brief elpa_solve_evp_complex_2stage_double: Fortran function to solve the complex double-precision eigenvalue problem with a 2 stage approach. This is called by "elpa_solve_evp_complex_double"
!>
......