elpa / elpa · Commits · dbc8c1d1

Commit dbc8c1d1
Authored Jan 05, 2022 by Andreas Marek
Always return a well defined value for GPU functions
Parent: 7b659cb2
Changes: 1 file
src/GPU/mod_vendor_agnostic_layer.F90
@@ -212,6 +212,8 @@ module elpa_gpu
       implicit none
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_devicesynchronize()
       endif
@@ -231,6 +233,8 @@ module elpa_gpu
       integer(kind=c_intptr_t), intent(in) :: elements
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_malloc_host(array, elements)
       endif
@@ -250,6 +254,8 @@ module elpa_gpu
       integer(kind=c_intptr_t), intent(in) :: elements
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_malloc(array, elements)
       endif
@@ -270,6 +276,8 @@ module elpa_gpu
       integer(kind=C_INT), intent(in) :: flag
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_host_register(array, elements, flag)
       endif
@@ -291,6 +299,8 @@ module elpa_gpu
       integer(kind=C_INT), intent(in) :: dir
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_memcpy_intptr(dst, src, size, dir)
       endif
@@ -312,6 +322,8 @@ module elpa_gpu
       integer(kind=C_INT), intent(in) :: dir
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_memcpy_cptr(dst, src, size, dir)
       endif
@@ -333,6 +345,8 @@ module elpa_gpu
       integer(kind=C_INT), intent(in) :: dir
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_memcpy_mixed(dst, src, size, dir)
       endif
@@ -356,6 +370,8 @@ module elpa_gpu
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_memset(a, val, size)
       endif
@@ -375,6 +391,8 @@ module elpa_gpu
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_free(a)
       endif
@@ -394,6 +412,8 @@ module elpa_gpu
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_free_host(a)
       endif
@@ -413,6 +433,8 @@ module elpa_gpu
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_host_unregister(a)
       endif
@@ -441,6 +463,8 @@ module elpa_gpu
       integer(kind=C_INT), intent(in) :: dir
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_memcpy2d_intptr(dst, dpitch, src, spitch, width, height, dir)
       endif
@@ -467,6 +491,8 @@ module elpa_gpu
       integer(kind=C_INT), intent(in) :: dir
       logical :: success

+      success = .false.
+
       if (use_gpu_vendor == nvidia_gpu) then
         success = cuda_memcpy2d_cptr(dst, dpitch, src, spitch, width, height, dir)
       endif
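The change is identical in every wrapper of the vendor-agnostic layer: success is initialized to .false. before the vendor dispatch, so the function returns a well defined value even when use_gpu_vendor matches no supported backend (otherwise the result would be left undefined on that path). Below is a minimal, self-contained sketch of the pattern; the module name gpu_wrapper_sketch, the vendor constants, and the .true. stand-in for the CUDA call are illustrative assumptions, not code taken from mod_vendor_agnostic_layer.F90.

    ! Sketch of the wrapper pattern introduced by this commit.
    ! Everything except the success = .false. idiom is an assumption
    ! made so the example compiles on its own.
    module gpu_wrapper_sketch
      implicit none
      integer, parameter :: no_gpu = 0, nvidia_gpu = 1   ! assumed vendor tags
      integer :: use_gpu_vendor = no_gpu

    contains

      function gpu_devicesynchronize() result(success)
        logical :: success

        success = .false.   ! defined default: returned when no vendor branch matches

        if (use_gpu_vendor == nvidia_gpu) then
          ! In ELPA this would call the CUDA binding (cuda_devicesynchronize());
          ! a constant stands in here to keep the sketch self-contained.
          success = .true.
        endif
      end function gpu_devicesynchronize

    end module gpu_wrapper_sketch

With the initialization in place, callers that test the returned logical see a deterministic .false. when GPU support is not selected, instead of reading an uninitialized value.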