Update documentation INSTALL.md and README.md

How to install ELPA
===================
First of all, if you do not want to build ELPA yourself, and you run Linux,
it is worth having a look at the ELPA webpage http://elpa.rzg.mpg.de
and/or the repositories of your Linux distribution: there exist
pre-built packages for a number of Linux distributions like Fedora,
Debian, and OpenSuse. More will hopefully follow in the future.
If you want to build ELPA yourself (or have to, since no packages are available),
please note that ELPA is shipped with a typical autotools "configure" and "make"
procedure. This is the only supported way to build and install ELPA.
If you obtained ELPA from the official git repository, you will not find
the needed configure script! Please look at the "INSTALL_FROM_GIT_VERSION" file
for documentation on how to proceed.
If --- against our recommendations --- you do not want to install ELPA as a
library, or you do not want to use the autotools, you will have to find a solution
yourself. This approach is NOT supported by us in any way, and we strongly
discourage it.
If you do this because you want to include ELPA within your own code project (i.e. use
ELPA not as an external library but "assimilate" it into your project),
please then distribute _ALL_ files of ELPA (as obtained from an official release tarball)
with your code. Note, however, that including the ELPA source files in your
code project (and not using ELPA as an external library) is only allowed under the terms of
the LGPL license if your code is ALSO under the LGPL.
(A): Installing ELPA as library with configure
===================================================
The configure installation is best done in four steps
1) run configure:
Check the available options with "configure --help".
ELPA is shipped with several different versions of the
elpa2 kernels, each optimized and tuned for a different
architecture.
1.1) Choice of MPI compiler (wrappers)
It is mandatory that the C and C++ parts are compiled with the
GNU C and C++ compilers. Thus, all ELPA test programs which are written
in C must be compiled and linked with the MPI compiler (wrapper) which uses
the GNU C compilers.
The Fortran parts of ELPA can be compiled with any Fortran compiler of
your choice. It is for example quite common to compile the Fortran part
with the Intel Fortran compiler, but the C and C++ parts must be compiled with
the GNU compilers.
Thus special care has to be taken that the correct MPI compiler (wrappers)
are found by the autotools! The configure script tries to find the correct
wrappers automatically, but sometimes it will fail.
In these cases it is necessary to set these compilers by hand:
../configure FC=fortran_compiler_wrapper_of_your_choice CC=gnu_compiler_wrapper
will tell autotools which wrappers to take.
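For instance, with Intel MPI one would typically combine the Intel Fortran wrapper with a GNU-based C wrapper. The wrapper names below are an assumption and vary between MPI installations; consult your MPI documentation for the actual names:

```shell
# Hypothetical wrapper names; check your MPI installation for the real ones.
# FC: Fortran compiler wrapper of your choice (here: Intel Fortran via Intel MPI)
# CC: an MPI wrapper that invokes the GNU C compiler
../configure FC=mpiifort CC=mpigcc
```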
1.2) Choice of ELPA2 kernels
With the release of ELPA (2014.06 or newer) it is no longer
mandatory to define the (real and complex) kernels
at build time. The configure procedure will build all the
kernels which can be used on the build system. The choice of
the kernels is now a run-time option. This is the most
convenient and also recommended way. It is intended to augment
this with an auto-tuning feature.
Nevertheless, one can still define at build-time _one_
specific kernel (for the real and the complex case each).
Then, ELPA is configured only with this real (and complex)
kernel, and all run-time checking is disabled. Have a look
at the "configure --help" messages and please refer to the
file "./src/elpa2_kernels/README_elpa2_kernels.txt".
1.3) Setting up Blacs/Scalapack
The configure script tries to auto-detect an installed Blacs/Scalapack
library. If this is successful, you do not have to specify anything
in this regard. However, this will fail, if you do not use Netlib
Blacs/Scalapack but vendor specific implementations (e.g. Intel's MKL
library or the implementation of Cray and so forth...).
In this case, please specify your Blacs/Scalapack installation and the
link line via the variables "SCALAPACK_LDFLAGS" and "SCALAPACK_FCFLAGS".
"SCALAPACK_LDFLAGS" should contain the correct link-line for your
Blacs/Scalapack installation and "SCALAPACK_FCFLAGS" the include path
and any other flags you need at compile time.
For example with Intel's MKL 11.2 library one might have to set
SCALAPACK_LDFLAGS="-L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 \
-lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 \
-lpthread -lm -Wl,-rpath,$MKLROOT/lib/intel64"
and
SCALAPACK_FCFLAGS="-I$MKLROOT/include/intel64/lp64"
Note that the actual MKL link line depends on the installed MKL version.
If your libraries are in non-standard locations, you can think about
specifying a runtime library search path ("rpath") in the link-line,
otherwise it will be necessary to update the LD_LIBRARY_PATH environment
variable.
In either case, whether Blacs/Scalapack was auto-detected or specified
manually, the configure procedure will check whether Blacs/Scalapack is
available at build time and try to link with it.
1.4) Setting optimizations
Please set the optimization flags that you prefer with the
variables "FCFLAGS", "CFLAGS", and "CXXFLAGS";
please see "./src/elpa2_kernels/README_elpa2_kernels.txt".
Note that _NO_ compiler optimization flags are set automatically. It
is always advised to set them explicitly, e.g.:
./configure CFLAGS="-O2" CXXFLAGS="-O2" FCFLAGS="-O2"
Note that it is mandatory to set optimization flags for C, C++, and Fortran,
since ELPA uses source files and compile steps from all these languages.
Also note that building the SSE and AVX kernels requires
compilation with the GNU C compiler (gcc). It is advised to
also set CFLAGS="-march=native" and CXXFLAGS="-march=native",
since otherwise the GNU compiler does not emit AVX instructions, even
if the hardware supports them. If you already included "-mavx" in the
flags, you can skip "-march=native".
If you want to use the newer AVX2 instructions, assuming they are supported on
your hardware, please set CFLAGS="-mavx2 -mfma" and CXXFLAGS="-mavx2 -mfma".
Setting the optimization flags for the AVX kernels can be a hassle. If AVX
kernels are built for your system, you can set the configure option
"--with-avx-optimizations=yes". This will automatically set a few compiler
optimization flags which turned out to be beneficial for AVX support.
However, it might be that on your system/compiler version other flags
are the better choice. AND this does _not_ set the above-mentioned flags,
which you should still set by hand:
./configure CFLAGS="-O2" CXXFLAGS="-O2" FCFLAGS="-O2"
An installation with AVX2 and best optimizations could thus look like this:
./configure CFLAGS="-O2 -mavx2 -mfma" CXXFLAGS="-O2 -mavx2 -mfma" FCFLAGS="-O2" --with-avx-optimizations=yes
1.5) Installation location
Set the "--prefix" flag if you wish another installation location than
the default "/usr/local/".
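A minimal sketch of choosing a non-default installation location (the path below is just an example):

```shell
# Install ELPA into a user-writable directory instead of /usr/local
./configure --prefix=$HOME/opt/elpa
```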
1.6) Hybrid OpenMP support
If you want to use the hybrid MPI/OpenMP version of ELPA, please specify
"--enable-openmp". Note that the ELPA library will then contain "_mt" in
its name to indicate multi-threading support.
1.7) Other
Note that at the moment we do not officially support "cross compilation",
although it should work.
2) run "make"
3) run "make check"
A simple test of ELPA is done. At the moment the usage of "mpiexec"
is required. If this is not possible on your system, you can run the
binaries
elpa1_test_real
elpa2_test_real
elpa1_test_complex
elpa2_test_complex
elpa2_test_complex_default_kernel
elpa2_test_complex_choose_kernel_with_api
elpa2_test_real_default_kernel
elpa2_test_real_choose_kernel_with_api
yourself. At the moment the tests check whether the residual and the
orthogonality of the found eigenvectors are lower than a threshold of
5e-12. If this test fails, or if you believe the threshold should be
even lower, please talk to us. Furthermore, your run-time choice of
ELPA kernels is tested. This is intended as a help to get used to this
new feature. With the same thought in mind, a binary "elpa2_print_kernels"
is provided, which is rather self-explanatory.
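A possible invocation from the build directory (the exact call may differ on your system, e.g. the number of MPI tasks or the mpiexec variant):

```shell
# List the ELPA2 kernels that were compiled into this build
mpiexec -n 1 ./elpa2_print_kernels
```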
Some of the above-mentioned tests are also provided as C source files.
These demonstrate how to call ELPA from a C program (i.e. which headers to include
and how to call the ELPA functions). They are NOT intended as a copy-and-paste solution!
4) run "make install"
Note that a pkg-config file for ELPA is produced. You should then be
able to link the ELPA library to your own applications.
B) Installing ELPA without the autotools procedure
===================================================
We do not support installation without the autotools anymore!
If you think you need this, sorry, but then you are on your own.
How to use ELPA
===============
Using ELPA should be quite simple. It is similar to ScaLAPACK, but the API
is different. See the examples in the directory "./test". There it is shown how
to invoke ELPA from a Fortran code.
If you installed ELPA, a pkg-config file is produced which will tell you how to
link your own program with ELPA.
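A sketch of how the generated pkg-config file can be used. Both the module name "elpa" and the prefix path are assumptions here; check the pkgconfig directory of your installation for the actual .pc file name (an OpenMP build may carry a "_mt" suffix):

```shell
# Make the installed .pc file visible to pkg-config (example prefix)
export PKG_CONFIG_PATH=$HOME/opt/elpa/lib/pkgconfig:$PKG_CONFIG_PATH

# Query the compile and link flags for your own build system
pkg-config --cflags elpa
pkg-config --libs elpa
```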
# Installation guide #
## Preamble ##
This file provides documentation on how to build the *ELPA* library in **version ELPA-2016.05.001**.
Although most of the documentation is generic to any *ELPA* release, some configure options
described in this document might be specific to the above mentioned version of *ELPA*.
##How to install ELPA##
First of all, if you do not want to build *ELPA* yourself, and you run Linux,
it is worth having a look at the [*ELPA* webpage](http://elpa.mpcdf.mpg.de)
and/or the repositories of your Linux distribution: there exist
pre-built packages for a number of Linux distributions like Fedora,
Debian, and OpenSuse. More will hopefully follow in the future.
If you want to build *ELPA* yourself (or have to, since no packages are available),
please note that *ELPA* is shipped with a typical autotools "configure" and "make"
procedure. This is the **only supported way** to build and install *ELPA*.
If you obtained *ELPA* from the official git repository, you will not find
the needed configure script! Please look at the "**INSTALL_FROM_GIT_VERSION**" file
for documentation on how to proceed.
##(A): Installing ELPA as library with configure##
*ELPA* can be installed with the build steps
- configure
- make
- make check
- make install
Please look at "configure --help" for all available options.
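The build steps above might be sketched as follows; the optimization flags and the installation prefix are example choices, not requirements:

```shell
./configure CFLAGS="-O2" CXXFLAGS="-O2" FCFLAGS="-O2" --prefix=$HOME/opt/elpa
make
make check     # runs the test programs; requires a working "mpiexec"
make install
```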
### Setting of MPI compiler and libraries ###
In the standard case *ELPA* needs an MPI compiler and MPI libraries. The configure script
will try to set these by itself. If, however, on the build system the compiler wrapper
cannot be found automatically, it is recommended to set it by hand with a variable, e.g.
configure FC=mpif90
### Hybrid MPI/OpenMP library build ###
The *ELPA* library can be built with hybrid MPI/OpenMP support. To do this, the
"--enable-openmp" configure option should be specified. If a hybrid version of *ELPA*
is wanted in addition, it is recommended to build two versions of *ELPA*: one with pure MPI and
one hybrid version. They can both be installed in the same path, since they have different
shared-object library names.
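A possible sketch of building both variants into the same prefix (the prefix is an example; a cleaner alternative is two separate out-of-tree build directories):

```shell
# Pure MPI version
./configure --prefix=$HOME/opt/elpa
make && make install
make distclean

# Hybrid MPI/OpenMP version; the installed library name carries "_mt"
./configure --enable-openmp --prefix=$HOME/opt/elpa
make && make install
```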
### Standard libraries in default installation paths ###
In order to build the *ELPA* library, some (depending on the settings during the
configure step, see below) libraries are needed.
Typically these are:
- Basic Linear Algebra Subroutines (BLAS)
- Lapack routines
- Basic Linear Algebra Communication Subroutines (BLACS)
- Scalapack routines
- a working MPI library
If the needed libraries are installed on the build system in standard paths (e.g. /usr/lib64),
then in most cases the *ELPA* configure step will recognize the needed libraries
automatically. No setting of any library paths should be necessary.
### Non standard paths or non standard libraries ###
If the standard libraries on the build system are installed in non-standard paths, or
special non-standard libraries (e.g. *Intel's MKL*) should be used, it might be necessary
to specify the appropriate link line with the **SCALAPACK_LDFLAGS** and **SCALAPACK_FCFLAGS**
variables.
For example, for performance reasons it might be beneficial to use the *BLAS*, *BLACS*, *LAPACK*,
and *SCALAPACK* implementations from *Intel's MKL* library.
Together with the Intel Fortran compiler, the call to configure might then look like:
configure SCALAPACK_LDFLAGS="-L$MKL_HOME/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential \
-lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -Wl,-rpath,$MKL_HOME/lib/intel64" \
SCALAPACK_FCFLAGS="-L$MKL_HOME/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential \
-lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -I$MKL_HOME/include/intel64/lp64"
and for *Intel MKL* together with *GNU gfortran*:
configure SCALAPACK_LDFLAGS="-L$MKL_HOME/lib/intel64 -lmkl_scalapack_lp64 -lmkl_gf_lp64 -lmkl_sequential \
-lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -Wl,-rpath,$MKL_HOME/lib/intel64" \
SCALAPACK_FCFLAGS="-L$MKL_HOME/lib/intel64 -lmkl_scalapack_lp64 -lmkl_gf_lp64 -lmkl_sequential \
-lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -I$MKL_HOME/include/intel64/lp64"
For the correct link line please refer to the documentation of the corresponding library. In the case of *Intel's MKL* we
suggest the [Intel Math Kernel Library Link Line Advisor](https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor).
### Choice of ELPA2 compute kernels ###
By default, the configure script tries to configure and build all ELPA2 compute kernels which are available for
the architecture. The specific kernel can then be chosen at run-time via the API or an environment variable (see
the **USERS GUIDE** for details).
If this is not desired, it is possible to build *ELPA* with only one kernel (not necessarily the same one) for the
real and the complex valued case, respectively. This can be done with the "--with-real-..-kernel-only" and
"--with-complex-..-kernel-only" configure options. For details please do a "configure --help".
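A hypothetical single-kernel build might then look like this. The kernel names below are placeholders, not verified option names; run "configure --help" to see the options actually offered on your system:

```shell
# Build only one real and one complex kernel; run-time kernel
# selection is then disabled. Option names are assumptions.
./configure --with-real-generic-kernel-only \
            --with-complex-generic-kernel-only
```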
### Non MPI one node shared-memory version of ELPA ###
Since release 2016.05.001 it is possible to build *ELPA* without any MPI support. This version can be used
by applications which do not have any MPI parallelisation. To get this version, use the
"--enable-shared-memory-only" configure flag. It is strongly recommended to also set the "--enable-openmp"
option, otherwise no parallelisation whatsoever will be present.
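A sketch of configuring this variant:

```shell
# No MPI; OpenMP then provides the only parallelisation in this build
./configure --enable-shared-memory-only --enable-openmp
```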
### Doxygen documentation ###
A doxygen documentation can be created with the "--enable-doxygen-doc" configure option.
The accompanying build-system change:

@@ -157,7 +157,7 @@ dist_files_DATA = \
 test/fortran_test_programs/test_real_with_c.F90 \
 src/print_available_elpa2_kernels.F90
-dist_doc_DATA = README COPYING/COPYING COPYING/gpl.txt COPYING/lgpl.txt
+dist_doc_DATA = README.md INSTALL.md CONTRIBUTING.md LICENSE COPYING/COPYING COPYING/gpl.txt COPYING/lgpl.txt
 # pkg-config stuff
 pkgconfigdir = $(libdir)/pkgconfig
If you obtained ELPA via the official git repository, please have
a look at the "INSTALL_FROM_GIT_VERSION" file for specific instructions.
In your use of ELPA, please respect the copyright restrictions
found below and in the "COPYING" directory in this repository. In a
nutshell, if you make improvements to ELPA, copyright for such
improvements remains with you, but we request that you relicense any
such improvements under the same exact terms of the (modified) LGPL v3
that we are using here. Please do not simply absorb ELPA into your own
project and then redistribute binary-only without making your exact
version of the ELPA source code (unmodified or MODIFIED) available as
well.
*** Citing:
A description of some algorithms present in ELPA can be found in:
T. Auckenthaler, V. Blum, H.-J. Bungartz, T. Huckle, R. Johanni,
L. Kr\"amer, B. Lang, H. Lederer, and P. R. Willems,
"Parallel solution of partial symmetric eigenvalue problems from
electronic structure calculations",
Parallel Computing 37, 783-794 (2011).
doi:10.1016/j.parco.2011.05.002.
Marek, A.; Blum, V.; Johanni, R.; Havu, V.; Lang, B.; Auckenthaler,
T.; Heinecke, A.; Bungartz, H.-J.; Lederer, H.
"The ELPA library: scalable parallel eigenvalue solutions for electronic
structure theory and computational science",
Journal of Physics Condensed Matter, 26 (2014)
doi:10.1088/0953-8984/26/21/213201
Please cite this paper when using ELPA. We also intend to publish an
overview description of the ELPA library as such, and ask you to
make appropriate reference to that as well, once it appears.
*** Copyright:
Copyright of the original code rests with the authors inside the ELPA
consortium. The code is distributed under the terms of the GNU Lesser General
Public License version 3 (LGPL).
Please also note the express "NO WARRANTY" disclaimers in the GPL.
Please see the file "COPYING" for details, and the files "gpl.txt" and
"lgpl.txt" for further information.
*** Using ELPA:
ELPA is designed to be compiled (Fortran) on its own, to be later
linked to your own application. In order to use ELPA, you must still
have a set of separate libraries that provide
- Basic Linear Algebra Subroutines (BLAS)
- Lapack routines
- Basic Linear Algebra Communication Subroutines (BLACS)
- Scalapack routines
- a working MPI library
Appropriate libraries can be obtained and compiled separately on many
architectures as free software. Alternatively, pre-packaged libraries
are usually available from proprietary HPC compiler vendors.
For example, Intel's ifort compiler distribution contains the "math kernel library"
(MKL), providing BLAS/Lapack/BLACS/Scalapack functionality (except on
Mac OS X, where the BLACS and Scalapack parts must still be obtained
and compiled separately).
A very usable general-purpose MPI library is OpenMPI (ELPA was tested
with OpenMPI 1.4.3 for example). Intel MPI seems to be a very well
performing option on Intel platforms.
Examples of how to use ELPA are included in the accompanying
test_*.f90 subroutines in the "test" directory. An example makefile
"Makefile.example" is also included as a minimal example of how to
build and link ELPA to any other piece of code. In general, however,
we suggest to use the build environment in order to install ELPA
as library to your system.
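A hedged sketch of linking your own Fortran program against an installed ELPA. The library name, module include path, and installation prefix are assumptions; prefer the flags reported by the installed pkg-config file:

```shell
ELPA_PREFIX=$HOME/opt/elpa   # example installation prefix

# Compile and link against the installed library; the rpath entry lets the
# binary find libelpa at run time without LD_LIBRARY_PATH
mpif90 -o my_app my_app.f90 \
    -I$ELPA_PREFIX/include \
    -L$ELPA_PREFIX/lib -lelpa \
    -Wl,-rpath,$ELPA_PREFIX/lib
```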
*** Structure of this repository:
Like most git repositories, this repository contains different branches.
The branch "master" is always identical to the one representing the latest release
of ELPA. All other branches represent either development work or previous releases of
ELPA.
# [Eigenvalue SoLvers for Petaflop-Applications (ELPA)](http://elpa.mpcdf.mpg.de)
## Current Release ##
The current release is ELPA 2015.11.001
## About *ELPA*
The computation of selected or all eigenvalues and eigenvectors of a symmetric
(Hermitian) matrix has high relevance for various scientific disciplines.
For the calculation of a significant part of the eigensystem typically direct
eigensolvers are used. For large problems, the eigensystem calculations with
existing solvers can become the computational bottleneck.
As a consequence, the *ELPA* project was initiated with the aim to develop and
implement an efficient eigenvalue solver for petaflop applications, supported
by the German Federal Government, through BMBF Grant 01IH08007, from
Dec 2008 to Nov 2011.
The challenging task has been addressed through a multi-disciplinary consortium
of partners with complementary skills in different areas.
The *ELPA* library was originally created by the *ELPA* consortium,
consisting of the following organizations:
- Max Planck Computing and Data Facility (MPCDF), formerly known as
Rechenzentrum Garching der Max-Planck-Gesellschaft (RZG),
- Bergische Universität Wuppertal, Lehrstuhl für angewandte
Informatik,
- Technische Universität München, Lehrstuhl für Informatik mit
Schwerpunkt Wissenschaftliches Rechnen,
- Fritz-Haber-Institut, Berlin, Abt. Theorie,
- Max-Planck-Institut für Mathematik in den Naturwissenschaften,
Leipzig, Abt. Komplexe Strukturen in Biologie und Kognition,
and
- IBM Deutschland GmbH
*ELPA* is distributed under the terms of version 3 of the GNU Lesser
General Public License, as published by the Free Software Foundation.
## Obtaining *ELPA*
There exist several ways to obtain the *ELPA* library, either as sources or as pre-compiled packages:
- official release tar.gz sources from the [*ELPA* webpage](http://elpa.mpcdf.mpg.de/elpa-tar-archive)
- from the [*ELPA* git repository](https://gitlab.mpcdf.mpg.de/elpa/elpa)
- as packaged software for several Linux distributions (e.g. Debian, Fedora, OpenSuse)
## Terms of usage
You are free to obtain and use the *ELPA* library, as long as you respect the terms
of version 3 of the GNU Lesser General Public License.
No other conditions have to be met.
Nonetheless, we are grateful if you cite the following publications:
T. Auckenthaler, V. Blum, H.-J. Bungartz, T. Huckle, R. Johanni,
L. Kr\"amer, B. Lang, H. Lederer, and P. R. Willems,
"Parallel solution of partial symmetric eigenvalue problems from
electronic structure calculations",
Parallel Computing 37, 783-794 (2011).
doi:10.1016/j.parco.2011.05.002.
Marek, A.; Blum, V.; Johanni, R.; Havu, V.; Lang, B.; Auckenthaler,
T.; Heinecke, A.; Bungartz, H.-J.; Lederer, H.
"The ELPA library: scalable parallel eigenvalue solutions for electronic
structure theory and computational science",
Journal of Physics Condensed Matter, 26 (2014)
doi:10.1088/0953-8984/26/21/213201
## Installation of the *ELPA* library
*ELPA* is shipped with a standard autotools/automake installation infrastructure.
Some other libraries are needed to install *ELPA* (the details depend on how you
configure *ELPA*):
- Basic Linear Algebra Subroutines (BLAS)
- Lapack routines
- Basic Linear Algebra Communication Subroutines (BLACS)
- Scalapack routines
- a working MPI library
Please refer to the **INSTALL document** on details of the installation process and
the possible configure options.
## Using *ELPA*
Please have a look at the "**Users Guide**" file for documentation, or at the
[online doxygen documentation](http://elpa.mpcdf.mpg.de/html/Documentation/ELPA-2015.11.001/html/index.html),
where you find the definition of the interfaces.
## Contributing to *ELPA*
It has been, and is, a tremendous effort to develop and maintain the
*ELPA* library. A lot of things can still be done, but our man-power is limited.
Thus every effort and help to improve the *ELPA* library is highly appreciated.
For details please see the CONTRIBUTING document.