Use event to handle distributed row gatherer #1882
With the current distributed row gatherer, we do the following:
local row gatherer (prepare data for MPI) -> synchronize (required by MPI) -> submit MPI ialltoallv -> submit local spmv
Ginkgo then performs the local spmv and the MPI ialltoallv.
However, the local spmv kernel can only be submitted after synchronize() returns, which leads to a gap in GPU activity.
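To make the ordering concrete, here is a minimal pseudocode sketch of the current flow. `gather_local_rows`, `start_ialltoallv`, and `submit_local_spmv` are hypothetical placeholders for illustration, not actual Ginkgo API; only `exec->synchronize()` is a real call:

```cpp
// Sketch of the current ordering (placeholder names, not Ginkgo API).
void apply_current(std::shared_ptr<const gko::Executor> exec)
{
    gather_local_rows(exec);   // pack the send buffer on the device
    exec->synchronize();       // MPI needs the buffer to be ready
    start_ialltoallv();        // submit the non-blocking MPI exchange
    submit_local_spmv(exec);   // local spmv can only be queued now,
                               // so the GPU idles during synchronize()
}
```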
This PR introduces another approach based on events:
local row gatherer -> record event -> submit local spmv -> wait on the event -> submit MPI ialltoallv.
The submission of the local spmv no longer needs to wait for the local row gatherer to finish, at the cost of a small additional overhead for recording the event.
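A corresponding sketch of the event-based ordering, again with hypothetical placeholder names and assuming an event handle with record/wait semantics:

```cpp
// Sketch of the event-based ordering (placeholder names, not Ginkgo API).
void apply_with_event(std::shared_ptr<const gko::Executor> exec)
{
    gather_local_rows(exec);          // pack the send buffer on the device
    auto event = record_event(exec);  // mark completion of the gather
    submit_local_spmv(exec);          // queue the local spmv immediately
    event->wait();                    // host waits only for the gather
    start_ialltoallv();               // submit the non-blocking MPI exchange
}
```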
For executors that do not support AsyncEvent, we simply synchronize when creating the event.
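A possible shape of that fallback, as a rough sketch and not the actual implementation:

```cpp
// Hypothetical fallback "event" for executors without AsyncEvent support:
// creating it synchronizes, so waiting on it becomes a no-op.
class sync_fallback_event {
public:
    explicit sync_fallback_event(std::shared_ptr<const gko::Executor> exec)
    {
        exec->synchronize();  // everything submitted so far has finished here
    }

    void wait() const {}  // nothing left to wait for
};
```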
Some observations from the profiler:

The local spmv starts earlier (yellow box for GPU activity, green for CPU), while the MPI communication now starts later (pink box). This gives better performance as long as the local spmv still covers the MPI communication; if the communication is not covered by the local spmv, performance becomes slower. I would say it is okay-ish now: in that situation we are network-bandwidth bound, which is slow compared to the other bandwidths involved.
We could improve this by introducing a thread, but that leads to another discussion about whether we allow threads in ReferenceExecutor.
Two other approaches might also help here: stream-aware MPI or async execution.
Stream-aware MPI needs more study, e.g. using isend/irecv to implement all_to_all_v together with the row gatherer.
For async execution, I have already done some work for async Schwarz, which touches on design questions regarding the executor and would lead to a larger and longer discussion, so I do not touch it here.
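As a starting point for that study, here is a rough sketch of how an alltoallv-style exchange could be expressed with MPI_Isend/MPI_Irecv (standard MPI calls; the wrapper function itself is only an illustration, not code from this PR):

```cpp
#include <mpi.h>
#include <vector>

// Illustrative replacement for MPI_Ialltoallv using point-to-point calls,
// so each pair could later be tied to the row gatherer's progress.
void sketch_ialltoallv(const double* send_buf, const int* send_counts,
                       const int* send_offsets, double* recv_buf,
                       const int* recv_counts, const int* recv_offsets,
                       MPI_Comm comm, std::vector<MPI_Request>& reqs)
{
    int size;
    MPI_Comm_size(comm, &size);
    reqs.clear();
    reqs.reserve(2 * size);
    for (int rank = 0; rank < size; ++rank) {
        if (recv_counts[rank] > 0) {
            reqs.emplace_back();
            MPI_Irecv(recv_buf + recv_offsets[rank], recv_counts[rank],
                      MPI_DOUBLE, rank, 0, comm, &reqs.back());
        }
        if (send_counts[rank] > 0) {
            reqs.emplace_back();
            MPI_Isend(send_buf + send_offsets[rank], send_counts[rank],
                      MPI_DOUBLE, rank, 0, comm, &reqs.back());
        }
    }
    // The caller overlaps local work here and eventually calls
    // MPI_Waitall(reqs.size(), reqs.data(), MPI_STATUSES_IGNORE);
}
```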