ELPA issues
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues

Issue #86: Could ELPA let user specify how many lowest eigenvalues to compute?
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/86
Reported by I-Te Lu (updated 2021-08-25)

Dear developers,
I have a quick question about ELPA: is there an ELPA subroutine that gives a few lowest eigenvalues and eigenvectors of a Hermitian matrix without solving the whole matrix? I have tried to look into the source codes, but could not find one (probably, I missed something...).
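[Editor's note] For context, the kind of partial solve being asked about here -- only the k lowest eigenpairs of a Hermitian matrix, without diagonalizing it fully -- looks like this in SciPy. This is shown purely to illustrate the request; it is not ELPA's API, and the matrix and names below are made up for the sketch:

```python
import numpy as np
from scipy.sparse.linalg import eigsh  # Lanczos-based partial eigensolver

rng = np.random.default_rng(42)
n, k = 200, 3

# Build a random real symmetric (Hermitian) test matrix.
A = rng.standard_normal((n, n))
H = (A + A.T) / 2

# Compute only the k smallest-algebraic ("SA") eigenvalues and their
# eigenvectors, instead of solving the whole spectrum.
vals, vecs = eigsh(H, k=k, which="SA")

print(vals.shape, vecs.shape)  # (3,) (200, 3)
```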
Thanks,
I-Te

Issue #85: GPU improvements in tridi_to_band for non-MPI
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/85
Reported by Andreas Marek (updated 2021-07-09)

A lot of memory transfers to / from the device could be avoided, similar to the CUDA_AWARE_MPI case.

Issue #82: Check MPI calls within OpenMP parallelized regions
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/82
Reported by Andreas Marek (updated 2021-09-03)

Currently we require the MPI library to provide the threading levels "MPI_THREAD_SERIALIZED" or "MPI_THREAD_MULTIPLE". This is done for safety and might not be necessary for all cases of calling ELPA.
Todo:
- make a list of all MPI calls (also from subroutines) which are called from within OpenMP parallel regions
- check for each call whether it can be guaranteed which thread (the master or any) initiates the communication and which thread (the master, the initiating thread, or any) can complete it
- adapt the required threading level accordingly

Assigned to: Soheil Soltani

Issue #81: Toeplitz test cases hang for really small matrices na=4
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/81
Reported by Andreas Marek (updated 2021-05-06)

If you use 4 MPI tasks for a setup of na=4, nev=4, nblk=1, the test cases for Toeplitz matrices hang.
The test-cases for other matrix setups do work, however.
It seems that the code hangs in the "solve" step.

Issue #80: Print line can exceed 132 characters on Summit
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/80
Reported by Andreas Marek (updated 2021-04-15)

On Summit, when compiling the test programs, it can happen (why?) that the line printing
the program name exceeds 132 characters
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/blob/master/test/Fortran/test.F90#L248

Issue #79: Print in the test programs the number of GPUs used
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/79
Reported by Andreas Marek (updated 2021-04-15)

If ELPA is built with GPU support, print at startup of the test programs the number of GPUs used.

Issue #74: Less verbosity if environment variables ELPA_DEFAULT_xxx are set
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/74
Reported by Andreas Marek (updated 2021-03-24)

Do not print on each task.

Issue #73: Service Desk (from dev@stellardeath.org): A gitlab test issue using the service-desk
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/73
Reported by GitLab Support Bot (updated 2021-02-24)

Foobar

Issue #72: Lift the restriction bandwidth == nblk in the GPU version of ELPA 2
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/72
Reported by Pavel Kus (updated 2019-10-29)

For the GPU version of ELPA 2, the intermediate bandwidth is always taken as the ScaLAPACK block size. It would be better if the optimal value (as for the CPU version) could be selected, since it is very important for performance.
It is, however, hard-coded somewhere in the band reduction step.

Issue #71: Investigate CPU memory allocation
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/71
Reported by Pavel Kus (updated 2019-10-22); assigned to Pavel Kus

Investigate why it is not possible to run with a much larger matrix on machines with very large memory (e.g. a node equipped with Optane memory).

Issue #70: Suspected problem for matrix of size 200k
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/70
Reported by Pavel Kus (updated 2021-02-24); assigned to Pavel Kus

Reported by Phillip Coles
-> check again the setup

Issue #69: ELPA tries to use GPU kernel when it should not
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/69
Reported by Pavel Kus (updated 2019-10-31); assigned to Pavel Kus

Happening during autotuning.
Proposed solution: disable the GPU kernel altogether (it does not work/perform well anyway).
Issue #68: Probable memory leak on GPU
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/68
Reported by Pavel Kus (updated 2019-10-31); assigned to Pavel Kus

Can be observed during autotuning, when different routines can run on GPU or CPU.

Issue #67: Cannot build ELPA on talos with CUDA 10.1
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/67
Reported by Pavel Kus (updated 2019-11-20); assigned to Pavel Kus

Works with CUDA 10.0.

Issue #66: ELPA changes the number of OpenMP threads, even for the calling program
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/66
Reported by Andreas Marek (updated 2019-04-18); assigned to Andreas Marek

When ELPA, e.g. in the autotuning, changes the number of OpenMP threads, this is done globally.
But this also affects the calling program.
To cure this, the original number of threads should be stored and restored when ELPA returns.

Issue #65: API change in elpa_deallocate()
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/65
Reported by Ask Hjorth Larsen (updated 2019-02-18)

Hi! Please excuse me if this is not the right place to post this, or if I have missed info in the docs.
`elpa_deallocate()` recently got another argument, namely the error code:
https://gitlab.mpcdf.mpg.de/elpa/elpa/commit/69b68de30e21d2d959baa426b968e39603ebd758
This will require existing interfaces to be updated as reported here:
https://gitlab.com/gpaw/gpaw/issues/197
Is there a recommended way to write interfaces that are compatible with both this *and* the older version? For example by accessing the version number in the preprocessor?

Issue #64: ELPA cannot be built on various (unsupported?) architectures
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/64
Reported by Andreas Marek (updated 2017-12-29)

See https://bugzilla.redhat.com/show_bug.cgi?id=1512229

Issue #63: Job Failed #386213: on Power8 with na=1500
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/63
Reported by Andreas Marek (updated 2019-04-17)

On Power8 with na=1500
the tests:
test_real_single_hermitian_multiply_1stage_random_default.sh
test_real_single_hermitian_multiply_1stage_gpu_random_default.sh
fail, due to slightly too large error residuals.

Issue #62: GitLab CI causes several problems on new appdev cluster
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/62
Reported by Andreas Marek (updated 2018-09-05)

- Frank matrix does not work with coverage
- GPU runs hang sometimes
- Pinning does not work
- Sometimes "stale file handle"
- Knl 1-4, maik create problems

Issue #61: QR decomposition does not work with "analytic" matrix
https://gitlab.mpcdf.mpg.de/elpa/elpa/-/issues/61
Reported by Andreas Marek (updated 2017-09-13); assigned to Pavel Kus

Since it does not work, this combination is not enabled at the moment.