OpenVDB  10.0.1
CudaDeviceBuffer Class Reference

Simple memory buffer using un-managed pinned host memory when compiled with NVCC. This class makes explicit use of CUDA, so replace it with your own memory allocator if you are not using CUDA. More...

#include <nanovdb/util/CudaDeviceBuffer.h>

Public Member Functions

 CudaDeviceBuffer (uint64_t size=0)
 
 CudaDeviceBuffer (const CudaDeviceBuffer &)=delete
 Disallow copy-construction. More...
 
 CudaDeviceBuffer (CudaDeviceBuffer &&other) noexcept
 Move constructor. More...
 
CudaDeviceBuffer & operator= (const CudaDeviceBuffer &)=delete
 Disallow copy assignment operation. More...
 
CudaDeviceBuffer & operator= (CudaDeviceBuffer &&other) noexcept
 Move assignment operator. More...
 
 ~CudaDeviceBuffer ()
 Destructor frees memory on both the host and device. More...
 
void init (uint64_t size)
 
uint8_t * data () const
 
uint8_t * deviceData () const
 
void deviceUpload (void *stream=0, bool sync=true) const
 Copy grid from the CPU/host to the GPU/device. If sync is false, the memory copy is asynchronous! More...
 
void deviceDownload (void *stream=0, bool sync=true) const
 Copy grid from the GPU/device to the CPU/host. If sync is false, the memory copy is asynchronous! More...
 
uint64_t size () const
 Returns the size in bytes of the raw memory buffer managed by this allocator. More...
 
bool empty () const
 Returns true if this allocator is empty, i.e. has no allocated memory. More...
 
void clear ()
 De-allocate all memory managed by this allocator and set all pointers to NULL. More...
 

Static Public Member Functions

static CudaDeviceBuffer create (uint64_t size, const CudaDeviceBuffer *context=nullptr)
 

Detailed Description

Simple memory buffer using un-managed pinned host memory when compiled with NVCC. This class makes explicit use of CUDA, so replace it with your own memory allocator if you are not using CUDA.

Note
While CUDA's pinned host memory allows for asynchronous memory copies between host and device, it is significantly slower than cached (un-pinned) memory on the host.

Constructor & Destructor Documentation

CudaDeviceBuffer ( uint64_t  size = 0)
inline
CudaDeviceBuffer ( const CudaDeviceBuffer &  )
delete

Disallow copy-construction.

CudaDeviceBuffer ( CudaDeviceBuffer &&  other)
inline noexcept

Move constructor.

~CudaDeviceBuffer ( )
inline

Destructor frees memory on both the host and device.

Member Function Documentation

void clear ( )
inline

De-allocate all memory managed by this allocator and set all pointers to NULL.

CudaDeviceBuffer create ( uint64_t  size,
const CudaDeviceBuffer *  context = nullptr 
)
inline static
uint8_t* data ( ) const
inline
Warning
Note that the pointer can be NULL if the allocator was not initialized!
uint8_t* deviceData ( ) const
inline
void deviceDownload ( void *  stream = 0,
bool  sync = true 
) const
inline

Copy grid from the GPU/device to the CPU/host. If sync is false, the memory copy is asynchronous!

void deviceUpload ( void *  stream = 0,
bool  sync = true 
) const
inline

Copy grid from the CPU/host to the GPU/device. If sync is false, the memory copy is asynchronous!

Note
This will allocate memory on the GPU/device if it is not already allocated.
bool empty ( ) const
inline

Returns true if this allocator is empty, i.e. has no allocated memory.

void init ( uint64_t  size)
inline
CudaDeviceBuffer& operator= ( const CudaDeviceBuffer &  )
delete

Disallow copy assignment operation.

CudaDeviceBuffer& operator= ( CudaDeviceBuffer &&  other)
inline noexcept

Move assignment operator.

uint64_t size ( ) const
inline

Returns the size in bytes of the raw memory buffer managed by this allocator.