dc.contributor.author | Kenzel, Michael | en_US |
dc.contributor.author | Kerbl, Bernhard | en_US |
dc.contributor.author | Winter, Martin | en_US |
dc.contributor.author | Steinberger, Markus | en_US |
dc.contributor.editor | O'Sullivan, Carol and Schmalstieg, Dieter | en_US |
dc.date.accessioned | 2021-04-25T15:49:27Z | |
dc.date.available | 2021-04-25T15:49:27Z | |
dc.date.issued | 2021 | |
dc.identifier.isbn | 978-3-03868-135-9 | |
dc.identifier.issn | 1017-4656 | |
dc.identifier.uri | https://doi.org/10.2312/egt.20211037 | |
dc.identifier.uri | https://diglib.eg.org:443/handle/10.2312/egt20211037 | |
dc.description.abstract | Since its inception, the CUDA programming model has been continuously evolving. Because the CUDA toolkit aims to consistently expose cutting-edge capabilities for general-purpose compute jobs to its users, the added features in each new version reflect the rapid changes that we observe in GPU architectures. Over the years, the changes in hardware, the growing scope of built-in functions and libraries, as well as advancing C++ standard compliance have expanded the design choices when coding for CUDA and significantly altered the guidelines for achieving peak performance. In this tutorial, we give a thorough introduction to the CUDA toolkit, demonstrate how a contemporary application can benefit from recently introduced features, and show how these features can be applied to task-based GPU scheduling in particular. For instance, we provide detailed examples of use cases for independent thread scheduling, cooperative groups, and the CUDA standard library, libcu++, which are certain to become an integral part of clean coding for CUDA in the near future. https://cuda-tutorial.github.io/ | en_US |
dc.publisher | The Eurographics Association | en_US |
dc.title | CUDA and Applications to Task-based Programming | en_US |
dc.description.seriesinformation | Eurographics 2021 - Tutorials | |
dc.description.sectionheaders | Tutorials | |
dc.identifier.doi | 10.2312/egt.20211037 | |
dc.identifier.pages | 11-15 | |