Commit 1bd66e66 authored by Andreas Marek

Merge branch 'fix_typos' into 'master_pre_stage'

Fix some typos in the documentation and examples

See merge request !6
parents 707c46d1 c6c3442e
@@ -8,7 +8,7 @@ For recommendations and suggestions, a simple email to us (*elpa-libray at mpcdf
If you would like to share with us your improvements, we suggest the following ways:
-1. If you use a public accessible git repository, please send us a merge request. This is the preferred way
+1. If you use a public accessible git repository, please send us a merge request. This is the preferred way.
2. An email with a patch, will also be ok. Please use *elpa-library at mpcdf.mpg.de*
Thank you for supporting *ELPA*!
......
@@ -4,7 +4,7 @@
This file provides documentation on how to build the *ELPA* library in **version ELPA-2017.11.001**.
With release of **version ELPA-2017.05.001** the build process has been significantly simplified,
-which makes it easier to install the *ELPA* library
+which makes it easier to install the *ELPA* library.
## How to install *ELPA* ##
@@ -80,7 +80,7 @@ An excerpt of the most important (*ELPA* specific) options reads as follows:
--with-gpu-support-only Compile and always use the GPU version
-We recommend that you do not build ELPA in it`s main directory but that you use it
+We recommend that you do not build ELPA in its main directory but that you use it
in a sub-directory:
mkdir build
@@ -99,7 +99,7 @@ For details, please have a look at the documentation for the compilers of your c
### Choice of building with or without MPI ###
-It is possible to build the *ELPA* library with or without MPI support
+It is possible to build the *ELPA* library with or without MPI support.
Normally *ELPA* is built with MPI, in order to speed up calculations by using distributed
parallelisation over several nodes. This is, however, only reasonable if the programs
@@ -123,7 +123,7 @@ cannot automatically found, it is recommended to set it by hand with a variable,
configure FC=mpif90
-Please note, thate setting a C MPI-compiler is NOT necessary, and in most case even harmful.
+Please note, that setting a C MPI-compiler is NOT necessary, and in most cases even harmful.
In some cases, on your system different MPI libraries and compilers are installed. Then it might happen
that during the build step an error like "no module mpi" or "cannot open module mpi" is given.
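In this situation it usually helps to point FC explicitly at the Fortran wrapper of the MPI library you actually intend to use; a hypothetical invocation (the wrapper name depends on your system) would be

configure FC=mpiifort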
@@ -151,8 +151,8 @@ configure FC=gfortran --with-mpi=0
DO NOT specify a MPI compiler here!
-Note, that the the installed *ELPA* library files will be suffixed with
-"_onenode", in order to descriminate this build from possible ones with MPI.
+Note, that the installed *ELPA* library files will be suffixed with
+"_onenode", in order to discriminate this build from possible ones with MPI.
Please continue reading at "C) Enabling GPU support"
@@ -185,14 +185,14 @@ shared-memory parallization, since *ELPA* is build without MPI support (see B).
To enable OpenMP support, add
"--enable-openmp"
--enable-openmp
as a configure option.
Note that as in case with/without MPI, you can also build and install versions of *ELPA*
with/without OpenMP support at the same time.
-However, the GPU choice at runtime, is not compatible with OpenMP support
+However, the GPU choice at runtime is not compatible with OpenMP support.
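For example, a hypothetical configure call enabling both MPI and OpenMP (the name of the MPI wrapper is system dependent) could look like:

configure FC=mpif90 --enable-openmp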
Please continue reading at "E) Standard libraries in default installation paths".
@@ -229,14 +229,14 @@ variables.
For example, due to performance reasons it might be beneficial to use the *BLAS*, *BLACS*, *LAPACK*,
and *SCALAPACK* implementation from *Intel's MKL* library.
-Togehter with the Intel Fortran Compiler the call to configure might then look like:
+Together with the Intel Fortran Compiler the call to configure might then look like:
configure SCALAPACK_LDFLAGS="-L$MKL_HOME/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential \
-lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -Wl,-rpath,$MKL_HOME/lib/intel64" \
SCALAPACK_FCFLAGS="-L$MKL_HOME/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential \
-lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -I$MKL_HOME/include/intel64/lp64"
-and for *INTEL MKL* togehter with *GNU GFORTRAN* :
+and for *INTEL MKL* together with *GNU GFORTRAN* :
configure SCALAPACK_LDFLAGS="-L$MKL_HOME/lib/intel64 -lmkl_scalapack_lp64 -lmkl_gf_lp64 -lmkl_sequential \
-lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread -lm -Wl,-rpath,$MKL_HOME/lib/intel64" \
@@ -245,7 +245,7 @@ configure SCALAPACK_LDFLAGS="-L$MKL_HOME/lib/intel64 -lmkl_scalapack_lp64 -lmkl_
Please, for the correct link-line refer to the documentation of the corresponding library. In case of *Intel's MKL* we
-sugest the [Intel Math Kernel Library Link Line Advisor] (https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor).
+suggest the [Intel Math Kernel Library Link Line Advisor] (https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor).
### G) Choice of ELPA2 compute kernels ###
@@ -281,32 +281,35 @@ with GNU compiler for the C part.
1. Building with Intel Fortran compiler and GNU C compiler:
-Remarks: - you have to know the name of the Intel Fortran compiler wrapper
-- you do not have to specify a C compiler (with CC); GNU C compiler is recognized automatically
-- you should specify compiler flags for Intel Fortran compiler; in the example only "-O3 -xAVX2" is set
-- you should be carefull with the CFLAGS. The example shows typical flags
+Remarks:
+- you have to know the name of the Intel Fortran compiler wrapper
+- you do not have to specify a C compiler (with CC); GNU C compiler is recognized automatically
+- you should specify compiler flags for Intel Fortran compiler; in the example only "-O3 -xAVX2" is set
+- you should be careful with the CFLAGS, the example shows typical flags
-FC=mpi_wrapper_for_intel_Fortran_compiler CC=mpi_wrapper_for_gnu_C_compiler ./configure FCFLAGS="-O3 -xAVX2" CFLAGS="-O3 -march=native -mavx2 -mfma -funsafe-loop-optimizations -funsafe-math-optimizations -ftree-vect-loop-version -ftree-vectorize" --enable-option-checking=fatal SCALAPACK_LDFLAGS="L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread " SCALAPACK_FCFLAGS="-I$MKL_HOME/include/intel64/lp64"
+FC=mpi_wrapper_for_intel_Fortran_compiler CC=mpi_wrapper_for_gnu_C_compiler ./configure FCFLAGS="-O3 -xAVX2" CFLAGS="-O3 -march=native -mavx2 -mfma -funsafe-loop-optimizations -funsafe-math-optimizations -ftree-vect-loop-version -ftree-vectorize" --enable-option-checking=fatal SCALAPACK_LDFLAGS="-L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread " SCALAPACK_FCFLAGS="-I$MKL_HOME/include/intel64/lp64"
2. Building with GNU Fortran compiler and GNU C compiler:
-Remarks: - you have to know the name of the GNU Fortran compiler wrapper
-- you DO have to specify a C compiler (with CC); GNU C compiler is recognized automatically
-- you should specify compiler flags for GNU Fortran compiler; in the example only "-O3 -march=native -mavx2 -mfma" is set
-- you should be carefull with the CFLAGS. The example shows typical flags
+Remarks:
+- you have to know the name of the GNU Fortran compiler wrapper
+- you DO have to specify a C compiler (with CC); GNU C compiler is recognized automatically
+- you should specify compiler flags for GNU Fortran compiler; in the example only "-O3 -march=native -mavx2 -mfma" is set
+- you should be careful with the CFLAGS, the example shows typical flags
-FC=mpi_wrapper_for_gnu_Fortran_compiler CC=mpi_wrapper_for_gnu_C_compiler ./configure FCFLAGS="-O3 -march=native -mavx2 -mfma" CFLAGS="-O3 -march=native -mavx2 -mfma -funsafe-loop-optimizations -funsafe-math-optimizations -ftree-vect-loop-version -ftree-vectorize" --enable-option-checking=fatal SCALAPACK_LDFLAGS="L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64 -lmkl_gf_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread " SCALAPACK_FCFLAGS="-I$MKL_HOME/include/intel64/lp64"
+FC=mpi_wrapper_for_gnu_Fortran_compiler CC=mpi_wrapper_for_gnu_C_compiler ./configure FCFLAGS="-O3 -march=native -mavx2 -mfma" CFLAGS="-O3 -march=native -mavx2 -mfma -funsafe-loop-optimizations -funsafe-math-optimizations -ftree-vect-loop-version -ftree-vectorize" --enable-option-checking=fatal SCALAPACK_LDFLAGS="-L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64 -lmkl_gf_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread " SCALAPACK_FCFLAGS="-I$MKL_HOME/include/intel64/lp64"
3. Building with Intel Fortran compiler and Intel C compiler:
-Remarks: - you have to know the name of the Intel Fortran compiler wrapper
-- you have to specify the Intel C compiler
-- you should specify compiler flags for Intel Fortran compiler; in the example only "-O3 -xAVX2" is set
-- you should be carefull with the CFLAGS. The example shows typical flags
+Remarks:
+- you have to know the name of the Intel Fortran compiler wrapper
+- you have to specify the Intel C compiler
+- you should specify compiler flags for Intel Fortran compiler; in the example only "-O3 -xAVX2" is set
+- you should be careful with the CFLAGS, the example shows typical flags
-FC=mpi_wrapper_for_intel_Fortran_compiler CC=mpi_wrapper_for_intel_C_compiler ./configure FCFLAGS="-O3 -xAVX2" CFLAGS="-O3 -xAVX2" --enable-option-checking=fatal SCALAPACK_LDFLAGS="L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread " SCALAPACK_FCFLAGS="-I$MKL_HOME/include/intel64/lp64"
+FC=mpi_wrapper_for_intel_Fortran_compiler CC=mpi_wrapper_for_intel_C_compiler ./configure FCFLAGS="-O3 -xAVX2" CFLAGS="-O3 -xAVX2" --enable-option-checking=fatal SCALAPACK_LDFLAGS="-L$MKLROOT/lib/intel64 -lmkl_scalapack_lp64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lmkl_blacs_intelmpi_lp64 -lpthread " SCALAPACK_FCFLAGS="-I$MKL_HOME/include/intel64/lp64"
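Whichever of these variants is chosen, after a successful configure run the usual autotools targets should apply; a typical sequence (assuming the standard automake targets) is

make
make check
make install

where "make check" builds and runs the test programs and is optional, but recommended.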
......
@@ -12,7 +12,7 @@ features/improvements for ELPA, and a rebuild of the autotools scripts
will be necessary anyway.
Thus please run "autogen.sh" after your changes, in order to build the
-autotoos scripts. Note that autoconf version >= 2.69 is needed for
+autotools scripts. Note that autoconf version >= 2.69 is needed for
ELPA.
After this step, please proceed as written in the "INSTALL" file.
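A minimal sketch of the complete sequence for a git checkout (assuming autoconf >= 2.69 and the other autotools are installed) is

./autogen.sh
./configure [options as described in the INSTALL file]
make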
......
## A list of known (and hopefully soon solved) issues of *ELPA* ##
For more details and recent updates please visit the online [issue system] (https://gitlab.mpcdf.mpg.de/elpa/elpa/issues)
-Issues which are not mentioned in a newer release are (considered as) solved
+Issues which are not mentioned in a newer release are (considered as) solved.
### ELPA 2017.11.001 release ###
- the elpa autotune print functions cannot print at the moment
......
@@ -60,7 +60,7 @@ https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
The state of release 2016.11.001.pre defines this interface
- 9
-NO incompatible API changes w.r.t. to the previous version. The interface of
+NO incompatible API changes w.r.t. the previous version. The interface of
the previous version 2016.11.001.pre has been marked as "legacy", although it
is fully available and supported.
@@ -70,12 +70,12 @@ https://www.gnu.org/software/libtool/manual/html_node/Updating-version-info.html
The state of release 2017.05.001 defines this interface
- 10
-NO incompatible API changes w.r.t. to the previous version. Only some functions have
+NO incompatible API changes w.r.t. the previous version. Only some functions have
been added. The state of release 2017.05.001 defines this interface
- 11
-Incompatible API changes w.r.t. to the previous version (only in the so called
-"legacy interface", since as anounced some deprecated function aliases have been
+Incompatible API changes w.r.t. the previous version (only in the so called
+"legacy interface", since as announced some deprecated function aliases have been
removed). For the current interface all changes since 2017.05.001 are
compatible, since only some functions have been added.
The state of release 2017.11.001.(rc1) defines this interface
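In practice a user program declares at runtime which API version it was written for; a minimal Fortran sketch (assuming the 20171201 API level of this release) is

use elpa
if (elpa_init(20171201) /= ELPA_OK) then
  print *, "ELPA API version not supported"
  stop
endif

If the installed library does not support the requested API version, elpa_init returns an error and the program can abort cleanly.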
@@ -9,7 +9,7 @@ is 20171201. This release supports the earliest API version 20170403.
status](https://gitlab.mpcdf.mpg.de/elpa/elpa/badges/master/build.svg)](https://gitlab.mpcdf.mpg.de/elpa/elpa/commits/master)
[![Code
-coverage](https://gitlab.mpcdf.mpg.de/elpa/badges/master/coverage.svg)](http://elpa.pages.mpcdf.de/elpa/coverage_summary
+coverage](https://gitlab.mpcdf.mpg.de/elpa/badges/master/coverage.svg)](http://elpa.pages.mpcdf.de/elpa/coverage_summary)
[![License: LGPL v3][license-badge]](LICENSE)
[license-badge]: https://img.shields.io/badge/License-LGPL%20v3-blue.svg
@@ -54,7 +54,7 @@ There exist several ways to obtain the *ELPA* library either as sources or pre-c
- official release tar-gz sources from the [*ELPA* webpage] (http://elpa.mpcdf.mpg.de/elpa-tar-archive)
- from the [*ELPA* git repository] (https://gitlab.mpcdf.mpg.de/elpa/elpa)
-- as packaged software for several Linux distribtutions (e.g. Debian, Fedora, OpenSuse)
+- as packaged software for several Linux distributions (e.g. Debian, Fedora, OpenSuse)
## Terms of usage
@@ -81,7 +81,7 @@ Nonetheless, we are grateful if you cite the following publications:
## Installation of the *ELPA* library
-*ELPA* is shipped with a standard autotools automake installation infrastruture.
+*ELPA* is shipped with a standard autotools automake installation infrastructure.
Some other libraries are needed to install *ELPA* (the details depend on how you
configure *ELPA*):
@@ -97,7 +97,7 @@ the possible configure options.
## Using *ELPA*
Please have a look at the "**USERS_GUIDE**" file for documentation, or at the [online]
-(http://elpa.mpcdf.mpg.de/html/Documentation/ELPA-2016.05.003/html/index.html) doygen
+(http://elpa.mpcdf.mpg.de/html/Documentation/ELPA-2017.11.001/html/index.html) doxygen
documentation, where you find the definition of the interfaces.
## Contributing to *ELPA*
@@ -105,7 +105,7 @@ documentation, where you find the definition of the interfaces.
It has been, and is, a tremendous effort to develop and maintain the
*ELPA* library. A lot of things can still be done, but our man-power is limited.
-Thus every effort and help to improve the *ELPA* library is higly appreciated.
+Thus every effort and help to improve the *ELPA* library is highly appreciated.
For details please see the CONTRIBUTING document.
-## Documentation how to switch from the legay API to the new API of the *ELPA* library ##
+## Documentation how to switch from the legacy API to the new API of the *ELPA* library ##
### Using *ELPA* from a Fortran code ###
@@ -31,7 +31,7 @@ useGPU=[.false.|.true.], choice of ELPA 2stage kernel .... New parameters were l
the *ELPA* library to reflect the growing functionality.
-The new interface of *ELPA* is more generic, which ,however, requires ONCE the adaption of the user code if the new
+The new interface of *ELPA* is more generic, which, however, requires ONCE the adaption of the user code if the new
interface should be used.
These are the new steps to do (again it is assumed that MPI and a distributed matrix in block-cyclic scalapack layout is already created in
@@ -51,21 +51,6 @@ the user application):
e => elpa_allocate()
!> call e%set("solver", ELPA_SOLVER_2STAGE, success)
!> \endcode
!> ... set and get all other options that are desired
!> \code{.f90}
!>
!> ! use method solve to solve the eigenvalue problem
!> ! other possible methods are desribed in \ref elpa_api::elpa_t derived type
!> call e%eigenvectors(a, ev, z, success)
!>
!> ! cleanup
!> call elpa_deallocate(e)
!>
!> call elpa_uninit()
3. set the parameters which describe the matrix setup and the MPI
@@ -87,7 +72,7 @@ the user application):
5. set/get any possible option (see man pages)
call e%get("qr", qr, success) ! querry whether QR-decomposition is set
call e%get("qr", qr, success) ! query whether QR-decomposition is set
print *, "qr =", qr
if (success .ne. ELPA_OK) stop
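Taken together, a compact sketch of the whole new-API call sequence might look as follows (a minimal sketch: declarations shortened, most error checks omitted, and the block-cyclic matrix distribution assumed to be set up already):

use elpa
class(elpa_t), pointer :: e
integer :: success

if (elpa_init(20171201) /= ELPA_OK) stop
e => elpa_allocate()

call e%set("na", na, success)                 ! global matrix size
call e%set("nev", nev, success)               ! number of eigenvectors wanted
call e%set("local_nrows", na_rows, success)   ! local dimensions of the
call e%set("local_ncols", na_cols, success)   ! block-cyclic distributed matrix
call e%set("nblk", nblk, success)             ! block size of the distribution
call e%set("mpi_comm_parent", mpi_comm_world, success)
call e%set("process_row", my_prow, success)
call e%set("process_col", my_pcol, success)

success = e%setup()

call e%set("solver", ELPA_SOLVER_2STAGE, success)
call e%eigenvectors(a, ev, z, success)        ! solve the eigenvalue problem

call elpa_deallocate(e)
call elpa_uninit()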
......
@@ -22,7 +22,7 @@ IF AND ONLY IF *ELPA* has been build with support of this legacy interface.
If you want to use the legacy interface, please have a look at section "B) Using the legacy API of the *ELPA* library".
-The legacy API defines all the functionallity as it has been defined in *ELPA* release 2016.11.011. Note, however,
+The legacy API defines all the functionality as it has been defined in *ELPA* release 2016.11.011. Note, however,
that all future features of *ELPA* will only be accessible via the new API defined in release 2017.05.001 or later.
## A) Using the final API definition of the *ELPA* library ##
@@ -89,7 +89,7 @@ The *ELPA* library consists of two main parts:
Both variants of the *ELPA* solvers are available for real or complex single and double precision valued matrices.
-Thus *ELPA* provides the following user functions (see man pages or [online] (http://elpa.mpcdf.mpg.de/html/Documentation/ELPA-2016.11.001/html/index.html) for details):
+Thus *ELPA* provides the following user functions (see man pages or [online] (http://elpa.mpcdf.mpg.de/html/Documentation/ELPA-2017.11.001/html/index.html) for details):
- elpa_get_communicators : set the row / column communicators for *ELPA*
- elpa_solve_evp_complex_1stage_{single|double} : solve a {single|double} precision complex eigenvalue problem with the *ELPA 1stage* solver
@@ -106,7 +106,7 @@ which *ELPA 2stage* compute kernels have been installed and which default kernel
If you want to solve an eigenvalue problem with *ELPA*, you have to decide whether you
want to use *ELPA 1stage* or *ELPA 2stage* solver. Normally, *ELPA 2stage* is the better
-choice since it is faster, but there a matrix dimensions where *ELPA 1stage* is supperior.
+choice since it is faster, but there are matrix dimensions where *ELPA 1stage* is superior.
Independent of the choice of the solver, the concept of calling *ELPA* is always the same:
@@ -121,7 +121,7 @@ To solve a Eigenvalue problem of this matrix with *ELPA*, one has
Here is a very simple MPI code snippet for using *ELPA 1stage*: For the definition of all variables
please have a look at the man pages and/or the online documentation (see above). A full version
-of a simple example program can be found in ./test_project/src.
+of a simple example program can be found in ./test_project_1stage_legacy_api/src.
! All ELPA routines need MPI communicators for communicating within
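! (snippet shortened here). A hedged sketch of the typical legacy calls,
! with purely illustrative variable names:
mpierr  = elpa_get_communicators(mpi_comm_world, my_prow, my_pcol, &
                                 mpi_comm_rows, mpi_comm_cols)
success = elpa_solve_evp_real_1stage_double(na, nev, a, na_rows, ev, z,   &
                                            na_rows, nblk, na_cols,       &
                                            mpi_comm_rows, mpi_comm_cols, &
                                            mpi_comm_world)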
@@ -309,7 +309,7 @@ The *ELPA 2stage* solver can be used in the same manner, as the *ELPA 1stage* so
However, the 2stage solver can be used with different compute kernels, which offers
more possibilities for configuration.
-It is recommended to first call the utillity program
+It is recommended to first call the utility program
elpa2_print_kernels
@@ -325,13 +325,13 @@ the default kernels will be set.
##### Setting the *ELPA 2stage* compute kernels with environment variables #####
-If the *ELPA* installation allows setting ther compute kernels with enviroment variables,
+If the *ELPA* installation allows setting their compute kernels with environment variables,
setting the variables "REAL_ELPA_KERNEL" and "COMPLEX_ELPA_KERNEL" will set the compute
kernels. The environment variable setting will take precedence over all other settings!
The utility program "elpa2_print_kernels" can list which kernels are available and which
-would be choosen. This reflects, as well the setting of the default kernel or the settings
-with the environment variables
+would be chosen. This reflects the setting of the default kernel as well as the setting
+with the environment variables.
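For example (the kernel name is purely illustrative; valid names are listed by elpa2_print_kernels):

export REAL_ELPA_KERNEL=REAL_ELPA_KERNEL_GENERIC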
##### Setting the *ELPA 2stage* compute kernels with API calls #####
......
@@ -215,7 +215,7 @@ program test_real_example
na_cols, mpi_comm_rows, mpi_comm_cols, mpi_comm_world)
if (.not.(success)) then
write(error_unit,*) "solve_evp_real_1stage produced an error! Aborting..."
write(error_unit,*) "elpa_solve_evp_real_1stage_double produced an error! Aborting..."
call MPI_ABORT(mpi_comm_world, 1, mpierr)
endif
......
@@ -42,7 +42,7 @@
!
!>
!> Fortran test program to demonstrate the use of
-!> ELPA 1 real case library.
+!> ELPA 2 real case library.
!> If "HAVE_REDIRECT" was defined at build time
!> the stdout and stderr output of each MPI task
!> can be redirected to files if the environment
@@ -215,7 +215,7 @@ program test_real_example
! Calculate eigenvalues/eigenvectors
if (myid==0) then
-print '(a)','| Entering one-step ELPA solver ... '
+print '(a)','| Entering two-step ELPA solver ... '
print *
end if
@@ -223,7 +223,7 @@ program test_real_example
call e%eigenvectors(a, ev, z, success)
if (myid==0) then
-print '(a)','| One-step ELPA solver complete.'
+print '(a)','| Two-step ELPA solver complete.'
print *
end if
......
@@ -207,7 +207,7 @@ program test_real_example
! Calculate eigenvalues/eigenvectors
if (myid==0) then
-print '(a)','| Entering one-step ELPA solver ... '
+print '(a)','| Entering two-step ELPA solver ... '
print *
end if
@@ -216,12 +216,12 @@ program test_real_example
na_cols, mpi_comm_rows, mpi_comm_cols, mpi_comm_world)
if (.not.(success)) then
write(error_unit,*) "solve_evp_real_1stage produced an error! Aborting..."
write(error_unit,*) "elpa_solve_evp_real_2stage_double produced an error! Aborting..."
call MPI_ABORT(mpi_comm_world, 1, mpierr)
endif
if (myid==0) then
-print '(a)','| One-step ELPA solver complete.'
+print '(a)','| Two-step ELPA solver complete.'
print *
end if
......