Cuda scheduling #133
Interesting concurrent queue for scheduling tasks on the GPU: the broker queue.
For numerical computing, it would be interesting to schedule and keep track of CUDA kernels on Nvidia GPUs with an interface similar to the CPU parallel API. The focus is on task parallelism and dataflow parallelism (task graphs); data parallelism (`parallelFor`) should be handled inside the GPU kernel.

From this presentation, https://developer.download.nvidia.com/CUDA/training/StreamsAndConcurrencyWebinar.pdf, we can use CUDA events to synchronize concurrent kernels:
(Note: there seems to be a typo in the code; it should be …)

At first glance, an event seems to be fired when the stream is empty.
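To make the synchronization pattern concrete, here is a minimal sketch of cross-stream ordering with CUDA events, following the `cudaEventRecord` / `cudaStreamWaitEvent` pattern shown in the NVIDIA webinar. The kernels `kernelA` and `kernelB` are hypothetical placeholders, not part of this project:

```cuda
#include <cuda_runtime.h>

// Hypothetical kernels, for illustration only.
__global__ void kernelA(float* data) { /* ... */ }
__global__ void kernelB(float* data) { /* ... */ }

int main() {
  cudaStream_t stream1, stream2;
  cudaEvent_t event;
  cudaStreamCreate(&stream1);
  cudaStreamCreate(&stream2);
  // cudaEventDisableTiming makes the event cheaper when it is
  // only used for synchronization, not for timing.
  cudaEventCreateWithFlags(&event, cudaEventDisableTiming);

  float* data;
  cudaMalloc(&data, 1024 * sizeof(float));

  kernelA<<<1, 256, 0, stream1>>>(data);
  // Record the event in stream1 after kernelA is enqueued...
  cudaEventRecord(event, stream1);
  // ...and make stream2 wait on it: kernelB will not start until
  // all work enqueued in stream1 before the record has completed.
  cudaStreamWaitEvent(stream2, event, 0);
  kernelB<<<1, 256, 0, stream2>>>(data);

  cudaStreamSynchronize(stream2);
  cudaFree(data);
  cudaEventDestroy(event);
  cudaStreamDestroy(stream1);
  cudaStreamDestroy(stream2);
  return 0;
}
```

This only orders stream2 behind a point in stream1; kernels enqueued in stream1 after the `cudaEventRecord` call are not waited on, which is what makes events usable as fine-grained dependency edges in a task graph.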
The text was updated successfully, but these errors were encountered: