# Eigenvalue SoLvers for Petaflop-Applications (ELPA)

## Current Release ##

The current release is ELPA 2019.05.002. The current supported API version
is 20190501; the earliest API version supported by this release is 20170403.

The old, obsolete legacy API will be deprecated in a future release!
Already now, all new features of ELPA are only available with the new API, so there
is no reason to keep the legacy API around for much longer.

The release ELPA 2018.11.001 was the last release in which the legacy API was
enabled by default (and could be disabled at build time).
With release ELPA 2019.05.001 the legacy API is disabled by default, but
can still be switched on at build time.
Most likely the legacy API will be deprecated and no longer supported
as of release ELPA 2019.11.001.

[![License: LGPL v3][licence-badge]](LICENSE)

## About *ELPA* ##

The computation of selected or all eigenvalues and eigenvectors of a symmetric
(Hermitian) matrix has high relevance for various scientific disciplines.
For the calculation of a significant part of the eigensystem typically direct
eigensolvers are used. For large problems, the eigensystem calculations with
existing solvers can become the computational bottleneck.

As a consequence, the *ELPA* project was initiated with the aim to develop and
implement an efficient eigenvalue solver for petaflop applications, supported
by the German Federal Government, through BMBF Grant 01IH08007, from
Dec 2008 to Nov 2011.

The challenging task has been addressed through a multi-disciplinary consortium
of partners with complementary skills in different areas.

The *ELPA* library was originally created by the *ELPA* consortium,
consisting of the following organizations:

- Max Planck Computing and Data Facility (MPCDF), formerly known as
  Rechenzentrum Garching der Max-Planck-Gesellschaft (RZG),
- Bergische Universität Wuppertal, Lehrstuhl für angewandte
- Technische Universität München, Lehrstuhl für Informatik mit
  Schwerpunkt Wissenschaftliches Rechnen,
- Fritz-Haber-Institut, Berlin, Abt. Theorie,
- Max-Planck-Institut für Mathematik in den Naturwissenschaften,
  Leipzig, Abt. Komplexe Strukturen in Biologie und Kognition,
- IBM Deutschland GmbH

*ELPA* is distributed under the terms of version 3 of the GNU Lesser General
Public License, as published by the Free Software Foundation.

## Obtaining *ELPA*

There are several ways to obtain the *ELPA* library, either as source code or as pre-compiled packages:

- as official release tar.gz sources from the *ELPA* webpage
- from the *ELPA* git repository
- as packaged software for several Linux distributions (e.g. Debian, Fedora, OpenSuse)

## Terms of usage

You are free to obtain and use the *ELPA* library, as long as you respect the
terms of version 3 of the GNU Lesser General Public License.

No other conditions have to be met.

Nonetheless, we are grateful if you cite the following publications:

  If you use ELPA in general:

  T. Auckenthaler, V. Blum, H.-J. Bungartz, T. Huckle, R. Johanni,
  L. Krämer, B. Lang, H. Lederer, and P. R. Willems,
  "Parallel solution of partial symmetric eigenvalue problems from
  electronic structure calculations",
  Parallel Computing 37, 783-794 (2011).

  Marek, A.; Blum, V.; Johanni, R.; Havu, V.; Lang, B.; Auckenthaler,
  T.; Heinecke, A.; Bungartz, H.-J.; Lederer, H.
  "The ELPA library: scalable parallel eigenvalue solutions for electronic
  structure theory and computational science",
  Journal of Physics: Condensed Matter 26, 213201 (2014)
  If you use the GPU version of ELPA:

  Kus, P.; Marek, A.; Lederer, H.,
  "GPU Optimization of Large-Scale Eigenvalue Solver",
  in: Radu F., Kumar K., Berre I., Nordbotten J., Pop I. (eds),
  Numerical Mathematics and Advanced Applications ENUMATH 2017,
  Lecture Notes in Computational Science and Engineering, vol. 126. Springer, Cham
  If you use the new API and/or autotuning:

  Kus, P.; Marek, A.; Koecher, S. S.; Kowalski, H.-H.; Carbogno, Ch.; Scheurer, Ch.; Reuter, K.; Scheffler, M.; Lederer, H.,
  "Optimizations of the Eigenvalue Solvers in the ELPA Library",
  Parallel Computing 85, 167-177 (2019)

## Installation of the *ELPA* library

*ELPA* ships with a standard autotools/automake build infrastructure.
Some other libraries are needed to install *ELPA* (the details depend on how you
configure *ELPA*):

  - Basic Linear Algebra Subprograms (BLAS)
  - LAPACK routines
  - Basic Linear Algebra Communication Subprograms (BLACS)
  - ScaLAPACK routines
  - a working MPI library

Please refer to the **INSTALL document** for details of the installation process and
the possible configure options.
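
A hypothetical configure invocation for an MPI build might look like the following; the compiler names, linker flags, and install prefix below are placeholders for your system, and the INSTALL document lists the actual options:

```shell
# Sketch of a configure/build sequence; the compilers, library flags,
# and the prefix are placeholders -- adapt them to your system.
./configure FC=mpif90 CC=mpicc \
    SCALAPACK_LDFLAGS="-lscalapack -llapack -lblas" \
    --prefix=$HOME/opt/elpa
make
make install
```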

## Using *ELPA*

Please have a look at the "**USERS_GUIDE**" file, or at the online doxygen
documentation, where you can find the definitions of the interfaces.
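
As an illustration of the new API, a minimal Fortran sketch might look like the following. The variables `na`, `nev`, `na_rows`, `na_cols`, `nblk`, `my_prow`, and `my_pcol` are assumed to have been set up beforehand via MPI/BLACS, and the exact routine signatures should be checked against the USERS_GUIDE and doxygen documentation:

```fortran
! Sketch only: assumes an initialized MPI/BLACS environment and a
! block-cyclically distributed matrix a(na_rows, na_cols).
use elpa
class(elpa_t), pointer :: e
integer :: error

if (elpa_init(20170403) /= ELPA_OK) error stop "ELPA API version not supported"

e => elpa_allocate()
call e%set("na", na, error)               ! global matrix size
call e%set("nev", nev, error)             ! number of eigenpairs wanted
call e%set("local_nrows", na_rows, error) ! local rows of the distributed matrix
call e%set("local_ncols", na_cols, error) ! local columns
call e%set("nblk", nblk, error)           ! BLACS block size
call e%set("mpi_comm_parent", MPI_COMM_WORLD, error)
call e%set("process_row", my_prow, error)
call e%set("process_col", my_pcol, error)
error = e%setup()

call e%set("solver", ELPA_SOLVER_2STAGE, error)
call e%eigenvectors(a, ev, z, error)      ! eigenvalues in ev, eigenvectors in z

call elpa_deallocate(e)
call elpa_uninit()
```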

## Contributing to *ELPA*

It has been, and still is, a tremendous effort to develop and maintain the
*ELPA* library. A lot can still be done, but our manpower is limited.

Thus every effort and help to improve the *ELPA* library is highly appreciated.
For details please see the CONTRIBUTING document.