Draft: Make CUDA memory management more robust

Bram Veenboer requested to merge cuda-memory into master

This MR will be split into a number of smaller MRs. Once all the unresolved threads have been resolved and all of the changes have eventually been merged, this MR will be closed.

With various changes to the different CUDA proxies over time, memory management has become too complicated and error-prone. We have seen strange memory-related issues when running IDG on large data sets, ranging from memory fragmentation to out-of-memory errors. Since allocating CUDA device memory is rather cheap, it is safer to let every routine manage the memory it needs, rather than keeping memory allocated over the lifetime of a Proxy (and InstanceCUDA).
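
As a rough illustration of the per-routine approach, here is a minimal sketch of a scope-bound device buffer. The `ScopedDeviceBuffer` class and `run_gridding` routine are hypothetical placeholders, not the actual IDG device-memory abstraction or proxy interface:

```cpp
#include <cuda_runtime.h>

#include <cstddef>
#include <stdexcept>

// Hypothetical RAII wrapper, for illustration only; the actual IDG code
// uses its own device-memory abstractions.
class ScopedDeviceBuffer {
 public:
  explicit ScopedDeviceBuffer(size_t bytes) {
    if (cudaMalloc(&ptr_, bytes) != cudaSuccess) {
      throw std::runtime_error("cudaMalloc failed");
    }
  }
  ~ScopedDeviceBuffer() { cudaFree(ptr_); }
  ScopedDeviceBuffer(const ScopedDeviceBuffer&) = delete;
  ScopedDeviceBuffer& operator=(const ScopedDeviceBuffer&) = delete;
  void* data() const { return ptr_; }

 private:
  void* ptr_ = nullptr;
};

// A routine allocates exactly what it needs and releases it on return,
// instead of relying on buffers that live as long as the Proxy.
void run_gridding(size_t visibilities_bytes) {
  ScopedDeviceBuffer d_visibilities(visibilities_bytes);
  // ... copy input and launch gridding kernels on d_visibilities.data() ...
}  // device memory is freed here, when the routine finishes
```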

With the current changes, the proxies have become somewhat longer and contain some duplicated code. However, it is now very explicit what every routine really uses, which makes it easier to change individual routines. For instance, we previously reused the d_visibilities buffer from gridding/degridding for another purpose in calibration, which introduced a dependency between the two. Such things can no longer happen, which should make memory management more robust.
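
To sketch how this removes the hidden dependency: each routine now allocates its own d_visibilities instead of borrowing a buffer that happens to exist because another routine allocated it. The routine names and sizes below are placeholders, not the actual proxy interface:

```cpp
#include <cuda_runtime.h>

#include <cstddef>

// Illustrative sketch only: do_degridding and do_calibration are placeholder
// names. Each routine owns its own d_visibilities, so changing one routine
// cannot silently break the other.
void do_degridding(size_t visibilities_bytes) {
  void* d_visibilities = nullptr;
  cudaMalloc(&d_visibilities, visibilities_bytes);
  // ... degridding kernels write into d_visibilities ...
  cudaFree(d_visibilities);
}

void do_calibration(size_t visibilities_bytes) {
  void* d_visibilities = nullptr;  // independent allocation, no hidden reuse
  cudaMalloc(&d_visibilities, visibilities_bytes);
  // ... calibration kernels use d_visibilities for their own purpose ...
  cudaFree(d_visibilities);
}
```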

Edited by Bram Veenboer
