OpenVDB  12.1.0
NanoVDB.h
Go to the documentation of this file.
1 // Copyright Contributors to the OpenVDB Project
2 // SPDX-License-Identifier: Apache-2.0
3 
4 /*!
5  \file nanovdb/NanoVDB.h
6 
7  \author Ken Museth
8 
9  \date January 8, 2020
10 
11  \brief Implements a light-weight self-contained VDB data-structure in a
12  single file! In other words, this is a significantly watered-down
13  version of the OpenVDB implementation, with few dependencies - so
14  a one-stop-shop for a minimalistic VDB data structure that run on
15  most platforms!
16 
17  \note It is important to note that NanoVDB (by design) is a read-only
18  sparse GPU (and CPU) friendly data structure intended for applications
19  like rendering and collision detection. As such it obviously lacks
20  a lot of the functionality and features of OpenVDB grids. NanoVDB
21  is essentially a compact linearized (or serialized) representation of
22  an OpenVDB tree with getValue methods only. For best performance use
23  the ReadAccessor::getValue method as opposed to the Tree::getValue
24  method. Note that since a ReadAccessor caches previous access patterns
25  it is by design not thread-safe, so use one instantiation per thread
26  (it is very light-weight). Also, it is not safe to copy accessors between
27  the GPU and CPU! In fact, client code should only interface
28  with the API of the Grid class (all other nodes of the NanoVDB data
29  structure can safely be ignored by most client codes)!
30 
31 
32  \warning NanoVDB grids can only be constructed via tools like createNanoGrid
33  or the GridBuilder. This explains why none of the grid nodes defined below
34  have public constructors or destructors.
35 
36  \details Please see the following paper for more details on the data structure:
37  K. Museth, “VDB: High-Resolution Sparse Volumes with Dynamic Topology”,
38  ACM Transactions on Graphics 32(3), 2013, which can be found here:
39  http://www.museth.org/Ken/Publications_files/Museth_TOG13.pdf
40 
41  NanoVDB was first published there: https://dl.acm.org/doi/fullHtml/10.1145/3450623.3464653
42 
43 
44  Overview: This file implements the following fundamental class that when combined
45  forms the backbone of the VDB tree data structure:
46 
47  Coord- a signed integer coordinate
48  Vec3 - a 3D vector
49  Vec4 - a 4D vector
50  BBox - a bounding box
51  Mask - a bitmask essential to the non-root tree nodes
52  Map - an affine coordinate transformation
53  Grid - contains a Tree and a map for world<->index transformations. Use
54  this class as the main API with client code!
55  Tree - contains a RootNode and getValue methods that should only be used for debugging
56  RootNode - the top-level node of the VDB data structure
57  InternalNode - the internal nodes of the VDB data structure
58  LeafNode - the lowest level tree nodes that encode voxel values and state
59  ReadAccessor - implements accelerated random access operations
60 
61  Semantics: A VDB data structure encodes values and (binary) states associated with
62  signed integer coordinates. Values encoded at the leaf node level are
63  denoted voxel values, and values associated with other tree nodes are referred
64  to as tile values, which by design cover a larger coordinate index domain.
65 
66 
67  Memory layout:
68 
69  It's important to emphasize that all the grid data (defined below) are explicitly 32 byte
70  aligned, which implies that any memory buffer that contains a NanoVDB grid must also be at
71  32 byte aligned. That is, the memory address of the beginning of a buffer (see ascii diagram below)
72  must be divisible by 32, i.e. uintptr_t(&buffer)%32 == 0! If this is not the case, the C++ standard
73  says the behaviour is undefined! Normally this is not a concerns on GPUs, because they use 256 byte
74  aligned allocations, but the same cannot be said about the CPU.
75 
76  GridData is always at the very beginning of the buffer immediately followed by TreeData!
77  The remaining nodes and blind-data are allowed to be scattered throughout the buffer,
78  though in practice they are arranged as:
79 
80  GridData: 672 bytes (e.g. magic, checksum, major, flags, index, count, size, name, map, world bbox, voxel size, class, type, offset, count)
81 
82  TreeData: 64 bytes (node counts and byte offsets)
83 
84  ... optional padding ...
85 
86  RootData: size depends on ValueType (index bbox, voxel count, tile count, min/max/avg/standard deviation)
87 
88  Array of: RootData::Tile
89 
90  ... optional padding ...
91 
92  Array of: Upper InternalNodes of size 32^3: bbox, two bit masks, 32768 tile values, and min/max/avg/standard deviation values
93 
94  ... optional padding ...
95 
96  Array of: Lower InternalNodes of size 16^3: bbox, two bit masks, 4096 tile values, and min/max/avg/standard deviation values
97 
98  ... optional padding ...
99 
100  Array of: LeafNodes of size 8^3: bbox, bit masks, 512 voxel values, and min/max/avg/standard deviation values
101 
102  ... optional padding ...
103 
104  Array of: GridBlindMetaData (288 bytes). The offset and count are defined in GridData::mBlindMetadataOffset and GridData::mBlindMetadataCount
105 
106  ... optional padding ...
107 
108  Array of: blind data
109 
110  Notation: "]---[" implies it has optional padding, and "][" implies zero padding
111 
112  [GridData(672B)][TreeData(64B)]---[RootData][N x Root::Tile]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
113  ^ ^ ^ ^ ^ ^ ^
114  | | | | | | GridBlindMetaData*
115  +-- Start of 32B aligned buffer | | | | +-- Node0::DataType* leafData
116  GridType::DataType* gridData | | | |
117  | | | +-- Node1::DataType* lowerData
118  RootType::DataType* rootData --+ | |
119  | +-- Node2::DataType* upperData
120  |
121  +-- RootType::DataType::Tile* tile
122 
123 */
124 
125 #ifndef NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
126 #define NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
127 
128 // The following two header files are the only mandatory dependencies
129 #include <nanovdb/util/Util.h>// for __hostdev__ and lots of other utility functions
130 #include <nanovdb/math/Math.h>// for Coord, BBox, Vec3, Vec4 etc
131 
132 // Do not change this value! 32 byte alignment is fixed in NanoVDB
133 #define NANOVDB_DATA_ALIGNMENT 32
134 
135 // NANOVDB_MAGIC_NUMB previously used for both grids and files (starting with v32.6.0)
136 // NANOVDB_MAGIC_GRID currently used exclusively for grids (serialized to a single buffer)
137 // NANOVDB_MAGIC_FILE currently used exclusively for files
138 // | : 0 in 30 corresponds to 0 in NanoVDB0
139 #define NANOVDB_MAGIC_NUMB 0x304244566f6e614eUL // "NanoVDB0" in hex - little endian (uint64_t)
140 #define NANOVDB_MAGIC_GRID 0x314244566f6e614eUL // "NanoVDB1" in hex - little endian (uint64_t)
141 #define NANOVDB_MAGIC_FILE 0x324244566f6e614eUL // "NanoVDB2" in hex - little endian (uint64_t)
142 #define NANOVDB_MAGIC_MASK 0x00FFFFFFFFFFFFFFUL // use this mask to remove the number
143 
144 #define NANOVDB_USE_NEW_MAGIC_NUMBERS// enables use of the new magic numbers described above
145 
146 #define NANOVDB_MAJOR_VERSION_NUMBER 32 // reflects changes to the ABI and hence also the file format
147 #define NANOVDB_MINOR_VERSION_NUMBER 8 // reflects changes to the API but not ABI
148 #define NANOVDB_PATCH_VERSION_NUMBER 0 // reflects changes that does not affect the ABI or API
149 
150 #define TBB_SUPPRESS_DEPRECATED_MESSAGES 1
151 
152 // This replaces a Coord key at the root level with a single uint64_t
153 #define NANOVDB_USE_SINGLE_ROOT_KEY
154 
155 // This replaces three levels of Coord keys in the ReadAccessor with one Coord
156 //#define NANOVDB_USE_SINGLE_ACCESSOR_KEY
157 
158 // Use this to switch between std::ofstream or FILE implementations
159 //#define NANOVDB_USE_IOSTREAMS
160 
161 #define NANOVDB_FPN_BRANCHLESS
162 
163 #if !defined(NANOVDB_ALIGN)
164 #define NANOVDB_ALIGN(n) alignas(n)
165 #endif // !defined(NANOVDB_ALIGN)
166 
167 namespace nanovdb {// =================================================================
168 
169 // --------------------------> Build types <------------------------------------
170 
171 /// @brief Dummy type for a voxel whose value equals an offset into an external value array
172 class ValueIndex{};
173 
174 /// @brief Dummy type for a voxel whose value equals an offset into an external value array of active values
175 class ValueOnIndex{};
176 
177 /// @brief Like @c ValueIndex but with a mutable mask
179 
180 /// @brief Like @c ValueOnIndex but with a mutable mask
182 
183 /// @brief Dummy type for a voxel whose value equals its binary active state
184 class ValueMask{};
185 
186 /// @brief Dummy type for a 16 bit floating point values (placeholder for IEEE 754 Half)
187 class Half{};
188 
189 /// @brief Dummy type for a 4bit quantization of float point values
190 class Fp4{};
191 
192 /// @brief Dummy type for a 8bit quantization of float point values
193 class Fp8{};
194 
195 /// @brief Dummy type for a 16bit quantization of float point values
196 class Fp16{};
197 
198 /// @brief Dummy type for a variable bit quantization of floating point values
199 class FpN{};
200 
201 /// @brief Dummy type for indexing points into voxels
202 class Point{};
203 
204 // --------------------------> GridType <------------------------------------
205 
206 /// @brief return the number of characters (including null termination) required to convert enum type to a string
207 ///
208 /// @note This curious implementation, which subtracts End from StrLen, avoids duplicate values in the enum!
209 template <class EnumT>
210 __hostdev__ inline constexpr uint32_t strlen(){return (uint32_t)EnumT::StrLen - (uint32_t)EnumT::End;}
211 
212 /// @brief List of types that are currently supported by NanoVDB
213 ///
214 /// @note To expand on this list do:
215 /// 1) Add the new type between Unknown and End in the enum below
216 /// 2) Add the new type to OpenToNanoVDB::processGrid that maps OpenVDB types to GridType
217 /// 3) Verify that the ConvertTrait in NanoToOpenVDB.h works correctly with the new type
218 /// 4) Add the new type to toGridType (defined below) that maps NanoVDB types to GridType
219 /// 5) Add the new type to toStr (defined below)
220 enum class GridType : uint32_t { Unknown = 0, // unknown value type - should rarely be used
221  Float = 1, // single precision floating point value
222  Double = 2, // double precision floating point value
223  Int16 = 3, // half precision signed integer value
224  Int32 = 4, // single precision signed integer value
225  Int64 = 5, // double precision signed integer value
226  Vec3f = 6, // single precision floating 3D vector
227  Vec3d = 7, // double precision floating 3D vector
228  Mask = 8, // no value, just the active state
229  Half = 9, // half precision floating point value (placeholder for IEEE 754 Half)
230  UInt32 = 10, // single precision unsigned integer value
231  Boolean = 11, // boolean value, encoded in bit array
232  RGBA8 = 12, // RGBA packed into 32bit word in reverse-order, i.e. R is lowest byte.
233  Fp4 = 13, // 4bit quantization of floating point value
234  Fp8 = 14, // 8bit quantization of floating point value
235  Fp16 = 15, // 16bit quantization of floating point value
236  FpN = 16, // variable bit quantization of floating point value
237  Vec4f = 17, // single precision floating 4D vector
238  Vec4d = 18, // double precision floating 4D vector
239  Index = 19, // index into an external array of active and inactive values
240  OnIndex = 20, // index into an external array of active values
241  IndexMask = 21, // like Index but with a mutable mask
242  OnIndexMask = 22, // like OnIndex but with a mutable mask
243  PointIndex = 23, // voxels encode indices to co-located points
244  Vec3u8 = 24, // 8bit quantization of floating point 3D vector (only as blind data)
245  Vec3u16 = 25, // 16bit quantization of floating point 3D vector (only as blind data)
246  UInt8 = 26, // 8 bit unsigned integer values (eg 0 -> 255 gray scale)
247  End = 27,// total number of types in this enum (excluding StrLen since it's not a type)
248  StrLen = End + 12};// this entry is used to determine the minimum size of c-string
249 
250 /// @brief Maps a GridType to a c-string
251 /// @param dst destination string of size 12 or larger
252 /// @param gridType GridType enum to be mapped to a string
253 /// @return Retuns a c-string used to describe a GridType
254 __hostdev__ inline char* toStr(char *dst, GridType gridType)
255 {
256  switch (gridType){
257  case GridType::Unknown: return util::strcpy(dst, "?");
258  case GridType::Float: return util::strcpy(dst, "float");
259  case GridType::Double: return util::strcpy(dst, "double");
260  case GridType::Int16: return util::strcpy(dst, "int16");
261  case GridType::Int32: return util::strcpy(dst, "int32");
262  case GridType::Int64: return util::strcpy(dst, "int64");
263  case GridType::Vec3f: return util::strcpy(dst, "Vec3f");
264  case GridType::Vec3d: return util::strcpy(dst, "Vec3d");
265  case GridType::Mask: return util::strcpy(dst, "Mask");
266  case GridType::Half: return util::strcpy(dst, "Half");
267  case GridType::UInt32: return util::strcpy(dst, "uint32");
268  case GridType::Boolean: return util::strcpy(dst, "bool");
269  case GridType::RGBA8: return util::strcpy(dst, "RGBA8");
270  case GridType::Fp4: return util::strcpy(dst, "Float4");
271  case GridType::Fp8: return util::strcpy(dst, "Float8");
272  case GridType::Fp16: return util::strcpy(dst, "Float16");
273  case GridType::FpN: return util::strcpy(dst, "FloatN");
274  case GridType::Vec4f: return util::strcpy(dst, "Vec4f");
275  case GridType::Vec4d: return util::strcpy(dst, "Vec4d");
276  case GridType::Index: return util::strcpy(dst, "Index");
277  case GridType::OnIndex: return util::strcpy(dst, "OnIndex");
278  case GridType::IndexMask: return util::strcpy(dst, "IndexMask");
279  case GridType::OnIndexMask: return util::strcpy(dst, "OnIndexMask");// StrLen = 11 + 1 + End
280  case GridType::PointIndex: return util::strcpy(dst, "PointIndex");
281  case GridType::Vec3u8: return util::strcpy(dst, "Vec3u8");
282  case GridType::Vec3u16: return util::strcpy(dst, "Vec3u16");
283  case GridType::UInt8: return util::strcpy(dst, "uint8");
284  default: return util::strcpy(dst, "End");
285  }
286 }
287 
288 // --------------------------> GridClass <------------------------------------
289 
290 /// @brief Classes (superset of OpenVDB) that are currently supported by NanoVDB
291 enum class GridClass : uint32_t { Unknown = 0,
292  LevelSet = 1, // narrow band level set, e.g. SDF
293  FogVolume = 2, // fog volume, e.g. density
294  Staggered = 3, // staggered MAC grid, e.g. velocity
295  PointIndex = 4, // point index grid
296  PointData = 5, // point data grid
297  Topology = 6, // grid with active states only (no values)
298  VoxelVolume = 7, // volume of geometric cubes, e.g. colors cubes in Minecraft
299  IndexGrid = 8, // grid whose values are offsets, e.g. into an external array
300  TensorGrid = 9, // Index grid for indexing learnable tensor features
301  End = 10,// total number of types in this enum (excluding StrLen since it's not a type)
302  StrLen = End + 7};// this entry is used to determine the minimum size of c-string
303 
304 
305 /// @brief Retuns a c-string used to describe a GridClass
306 /// @param dst destination string of size 7 or larger
307 /// @param gridClass GridClass enum to be converted to a string
308 __hostdev__ inline char* toStr(char *dst, GridClass gridClass)
309 {
310  switch (gridClass){
311  case GridClass::Unknown: return util::strcpy(dst, "?");
312  case GridClass::LevelSet: return util::strcpy(dst, "SDF");
313  case GridClass::FogVolume: return util::strcpy(dst, "FOG");
314  case GridClass::Staggered: return util::strcpy(dst, "MAC");
315  case GridClass::PointIndex: return util::strcpy(dst, "PNTIDX");// StrLen = 6 + 1 + End
316  case GridClass::PointData: return util::strcpy(dst, "PNTDAT");
317  case GridClass::Topology: return util::strcpy(dst, "TOPO");
318  case GridClass::VoxelVolume: return util::strcpy(dst, "VOX");
319  case GridClass::IndexGrid: return util::strcpy(dst, "INDEX");
320  case GridClass::TensorGrid: return util::strcpy(dst, "TENSOR");
321  default: return util::strcpy(dst, "END");
322  }
323 }
324 
325 // --------------------------> GridFlags <------------------------------------
326 
327 /// @brief Grid flags which indicate what extra information is present in the grid buffer.
328 enum class GridFlags : uint32_t {
329  HasLongGridName = 1 << 0, // grid name is longer than 256 characters
330  HasBBox = 1 << 1, // nodes contain bounding-boxes of active values
331  HasMinMax = 1 << 2, // nodes contain min/max of active values
332  HasAverage = 1 << 3, // nodes contain averages of active values
333  HasStdDeviation = 1 << 4, // nodes contain standard deviations of active values
334  IsBreadthFirst = 1 << 5, // nodes are typically arranged breadth-first in memory
335  End = 1 << 6, // use End - 1 as a mask for the 5 lower bit flags
336  StrLen = End + 23,// this entry is used to determine the minimum size of c-string
337 };
338 
339 /// @brief Retuns a c-string used to describe a GridFlags
340 /// @param dst destination string of size 23 or larger
341 /// @param gridFlags GridFlags enum to be converted to a string
342 __hostdev__ inline const char* toStr(char *dst, GridFlags gridFlags)
343 {
344  switch (gridFlags){
345  case GridFlags::HasLongGridName: return util::strcpy(dst, "has long grid name");
346  case GridFlags::HasBBox: return util::strcpy(dst, "has bbox");
347  case GridFlags::HasMinMax: return util::strcpy(dst, "has min/max");
348  case GridFlags::HasAverage: return util::strcpy(dst, "has average");
349  case GridFlags::HasStdDeviation: return util::strcpy(dst, "has standard deviation");// StrLen = 22 + 1 + End
350  case GridFlags::IsBreadthFirst: return util::strcpy(dst, "is breadth-first");
351  default: return util::strcpy(dst, "end");
352  }
353 }
354 
355 // --------------------------> MagicType <------------------------------------
356 
357 /// @brief Enums used to identify magic numbers recognized by NanoVDB
358 enum class MagicType : uint32_t { Unknown = 0,// first 64 bits are neither of the cases below
359  OpenVDB = 1,// first 32 bits = 0x56444220UL
360  NanoVDB = 2,// first 64 bits = NANOVDB_MAGIC_NUMB
361  NanoGrid = 3,// first 64 bits = NANOVDB_MAGIC_GRID
362  NanoFile = 4,// first 64 bits = NANOVDB_MAGIC_FILE
363  End = 5,
364  StrLen = End + 14};// this entry is used to determine the minimum size of c-string
365 
366 /// @brief maps 64 bits of magic number to enum
367 __hostdev__ inline MagicType toMagic(uint64_t magic)
368 {
369  switch (magic){
373  default: return (magic & ~uint32_t(0)) == 0x56444220UL ? MagicType::OpenVDB : MagicType::Unknown;
374  }
375 }
376 
377 /// @brief print 64-bit magic number to string
378 /// @param dst destination string of size 25 or larger
379 /// @param magic 64 bit magic number to be printed
380 /// @return return destination string @c dst
381 __hostdev__ inline char* toStr(char *dst, MagicType magic)
382 {
383  switch (magic){
384  case MagicType::Unknown: return util::strcpy(dst, "unknown");
385  case MagicType::NanoVDB: return util::strcpy(dst, "nanovdb");
386  case MagicType::NanoGrid: return util::strcpy(dst, "nanovdb::Grid");// StrLen = 13 + 1 + End
387  case MagicType::NanoFile: return util::strcpy(dst, "nanovdb::File");
388  case MagicType::OpenVDB: return util::strcpy(dst, "openvdb");
389  default: return util::strcpy(dst, "end");
390  }
391 }
392 
393 // --------------------------> PointType enums <------------------------------------
394 
395 // Define the type used when the points are encoded as blind data in the output grid
396 enum class PointType : uint32_t { Disable = 0,// no point information e.g. when BuildT != Point
397  PointID = 1,// linear index of type uint32_t to points
398  World64 = 2,// Vec3d in world space
399  World32 = 3,// Vec3f in world space
400  Grid64 = 4,// Vec3d in grid space
401  Grid32 = 5,// Vec3f in grid space
402  Voxel32 = 6,// Vec3f in voxel space
403  Voxel16 = 7,// Vec3u16 in voxel space
404  Voxel8 = 8,// Vec3u8 in voxel space
405  Default = 9,// output matches input, i.e. Vec3d or Vec3f in world space
406  End =10 };
407 
408 // --------------------------> GridBlindData enums <------------------------------------
409 
410 /// @brief Blind-data Classes that are currently supported by NanoVDB
411 enum class GridBlindDataClass : uint32_t { Unknown = 0,
412  IndexArray = 1,
413  AttributeArray = 2,
414  GridName = 3,
415  ChannelArray = 4,
416  End = 5 };
417 
418 /// @brief Blind-data Semantics that are currently understood by NanoVDB
419 enum class GridBlindDataSemantic : uint32_t { Unknown = 0,
420  PointPosition = 1, // 3D coordinates in an unknown space
421  PointColor = 2,
422  PointNormal = 3,
423  PointRadius = 4,
424  PointVelocity = 5,
425  PointId = 6,
426  WorldCoords = 7, // 3D coordinates in world space, e.g. (0.056, 0.8, 1,8)
427  GridCoords = 8, // 3D coordinates in grid space, e.g. (1.2, 4.0, 5.7), aka index-space
428  VoxelCoords = 9, // 3D coordinates in voxel space, e.g. (0.2, 0.0, 0.7)
429  End = 10 };
430 
431 // --------------------------> BuildTraits <------------------------------------
432 
433 /// @brief Define static boolean tests for template build types
434 template<typename T>
436 {
437  // check if T is an index type
440  static constexpr bool is_offindex = util::is_same<T, ValueIndex, ValueIndexMask>::value;
441  static constexpr bool is_indexmask = util::is_same<T, ValueIndexMask, ValueOnIndexMask>::value;
442  // check if T is a compressed float type with fixed bit precision
443  static constexpr bool is_FpX = util::is_same<T, Fp4, Fp8, Fp16>::value;
444  // check if T is a compressed float type with fixed or variable bit precision
445  static constexpr bool is_Fp = util::is_same<T, Fp4, Fp8, Fp16, FpN>::value;
446  // check if T is a POD float type, i.e float or double
447  static constexpr bool is_float = util::is_floating_point<T>::value;
448  // check if T is a template specialization of LeafData<T>, i.e. has T mValues[512]
449  static constexpr bool is_special = is_index || is_Fp || util::is_same<T, Point, bool, ValueMask>::value;
450 }; // BuildTraits
451 
452 // --------------------------> BuildToValueMap <------------------------------------
453 
454 /// @brief Maps one type (e.g. the build types above) to other (actual) types
455 template<typename T>
457 {
458  using Type = T;
459  using type = T;
460 };
461 
462 template<>
464 {
465  using Type = uint64_t;
466  using type = uint64_t;
467 };
468 
469 template<>
471 {
472  using Type = uint64_t;
473  using type = uint64_t;
474 };
475 
476 template<>
478 {
479  using Type = uint64_t;
480  using type = uint64_t;
481 };
482 
483 template<>
485 {
486  using Type = uint64_t;
487  using type = uint64_t;
488 };
489 
490 template<>
492 {
493  using Type = bool;
494  using type = bool;
495 };
496 
497 template<>
499 {
500  using Type = float;
501  using type = float;
502 };
503 
504 template<>
506 {
507  using Type = float;
508  using type = float;
509 };
510 
511 template<>
513 {
514  using Type = float;
515  using type = float;
516 };
517 
518 template<>
520 {
521  using Type = float;
522  using type = float;
523 };
524 
525 template<>
527 {
528  using Type = float;
529  using type = float;
530 };
531 
532 template<>
534 {
535  using Type = uint64_t;
536  using type = uint64_t;
537 };
538 
539 // --------------------------> utility functions related to alignment <------------------------------------
540 
541 /// @brief return true if the specified pointer is 32 byte aligned
542 __hostdev__ inline static bool isAligned(const void* p){return uint64_t(p) % NANOVDB_DATA_ALIGNMENT == 0;}
543 
544 /// @brief return the smallest number of bytes that when added to the specified pointer results in a 32 byte aligned pointer.
545 __hostdev__ inline static uint64_t alignmentPadding(const void* p)
546 {
547  NANOVDB_ASSERT(p);
549 }
550 
551 /// @brief offset the specified pointer so it is 32 byte aligned. Works with both const and non-const pointers.
552 template <typename T>
553 __hostdev__ inline static T* alignPtr(T* p){return util::PtrAdd<T>(p, alignmentPadding(p));}
554 
555 // --------------------------> isFloatingPoint(GridType) <------------------------------------
556 
557 /// @brief return true if the GridType maps to a floating point type
558 __hostdev__ inline bool isFloatingPoint(GridType gridType)
559 {
560  return gridType == GridType::Float ||
561  gridType == GridType::Double ||
562  gridType == GridType::Half ||
563  gridType == GridType::Fp4 ||
564  gridType == GridType::Fp8 ||
565  gridType == GridType::Fp16 ||
566  gridType == GridType::FpN;
567 }
568 
569 // --------------------------> isFloatingPointVector(GridType) <------------------------------------
570 
571 /// @brief return true if the GridType maps to a floating point vec3.
573 {
574  return gridType == GridType::Vec3f ||
575  gridType == GridType::Vec3d ||
576  gridType == GridType::Vec4f ||
577  gridType == GridType::Vec4d;
578 }
579 
580 // --------------------------> isInteger(GridType) <------------------------------------
581 
582 /// @brief Return true if the GridType maps to a POD integer type.
583 /// @details These types are used to associate a voxel with a POD integer type
584 __hostdev__ inline bool isInteger(GridType gridType)
585 {
586  return gridType == GridType::Int16 ||
587  gridType == GridType::Int32 ||
588  gridType == GridType::Int64 ||
589  gridType == GridType::UInt32||
590  gridType == GridType::UInt8;
591 }
592 
593 // --------------------------> isIndex(GridType) <------------------------------------
594 
595 /// @brief Return true if the GridType maps to a special index type (not a POD integer type).
596 /// @details These types are used to index from a voxel into an external array of values, e.g. sidecar or blind data.
597 __hostdev__ inline bool isIndex(GridType gridType)
598 {
599  return gridType == GridType::Index ||// index both active and inactive values
600  gridType == GridType::OnIndex ||// index active values only
601  gridType == GridType::IndexMask ||// as Index, but with an additional mask
602  gridType == GridType::OnIndexMask;// as OnIndex, but with an additional mask
603 }
604 
605 // --------------------------> isValue(GridType, GridClass) <------------------------------------
606 
607 /// @brief return true if the combination of GridType and GridClass is valid.
608 __hostdev__ inline bool isValid(GridType gridType, GridClass gridClass)
609 {
610  if (gridClass == GridClass::LevelSet || gridClass == GridClass::FogVolume) {
611  return isFloatingPoint(gridType);
612  } else if (gridClass == GridClass::Staggered) {
613  return isFloatingPointVector(gridType);
614  } else if (gridClass == GridClass::PointIndex || gridClass == GridClass::PointData) {
615  return gridType == GridType::PointIndex || gridType == GridType::UInt32;
616  } else if (gridClass == GridClass::Topology) {
617  return gridType == GridType::Mask;
618  } else if (gridClass == GridClass::IndexGrid) {
619  return isIndex(gridType);
620  } else if (gridClass == GridClass::VoxelVolume) {
621  return gridType == GridType::RGBA8 || gridType == GridType::Float ||
622  gridType == GridType::Double || gridType == GridType::Vec3f ||
623  gridType == GridType::Vec3d || gridType == GridType::UInt32 ||
624  gridType == GridType::UInt8;
625  }
626  return gridClass < GridClass::End && gridType < GridType::End; // any valid combination
627 }
628 
629 // --------------------------> validation of blind data meta data <------------------------------------
630 
631 /// @brief return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid.
632 __hostdev__ inline bool isValid(const GridBlindDataClass& blindClass,
633  const GridBlindDataSemantic& blindSemantics,
634  const GridType& blindType)
635 {
636  bool test = false;
637  switch (blindClass) {
639  test = (blindSemantics == GridBlindDataSemantic::Unknown ||
640  blindSemantics == GridBlindDataSemantic::PointId) &&
641  isInteger(blindType);
642  break;
644  if (blindSemantics == GridBlindDataSemantic::PointPosition ||
645  blindSemantics == GridBlindDataSemantic::WorldCoords) {
646  test = blindType == GridType::Vec3f || blindType == GridType::Vec3d;
647  } else if (blindSemantics == GridBlindDataSemantic::GridCoords) {
648  test = blindType == GridType::Vec3f;
649  } else if (blindSemantics == GridBlindDataSemantic::VoxelCoords) {
650  test = blindType == GridType::Vec3f || blindType == GridType::Vec3u8 || blindType == GridType::Vec3u16;
651  } else {
652  test = blindSemantics != GridBlindDataSemantic::PointId;
653  }
654  break;
656  test = blindSemantics == GridBlindDataSemantic::Unknown && blindType == GridType::Unknown;
657  break;
658  default: // captures blindClass == Unknown and ChannelArray
659  test = blindClass < GridBlindDataClass::End &&
660  blindSemantics < GridBlindDataSemantic::End &&
661  blindType < GridType::End; // any valid combination
662  break;
663  }
664  //if (!test) printf("Invalid combination: GridBlindDataClass=%u, GridBlindDataSemantic=%u, GridType=%u\n",(uint32_t)blindClass, (uint32_t)blindSemantics, (uint32_t)blindType);
665  return test;
666 }
667 
668 // ----------------------------> Version class <-------------------------------------
669 
670 /// @brief Bit-compacted representation of all three version numbers
671 ///
672 /// @details major is the top 11 bits, minor is the 11 middle bits and patch is the lower 10 bits
673 class Version
674 {
675  uint32_t mData; // 11 + 11 + 10 bit packing of major + minor + patch
676 public:
677  static constexpr uint32_t End = 0, StrLen = 8;// for strlen<Version>()
678  /// @brief Default constructor
680  : mData(uint32_t(NANOVDB_MAJOR_VERSION_NUMBER) << 21 |
681  uint32_t(NANOVDB_MINOR_VERSION_NUMBER) << 10 |
683  {
684  }
685  /// @brief Constructor from a raw uint32_t data representation
686  __hostdev__ Version(uint32_t data) : mData(data) {}
687  /// @brief Constructor from major.minor.patch version numbers
688  __hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
689  : mData(major << 21 | minor << 10 | patch)
690  {
691  NANOVDB_ASSERT(major < (1u << 11)); // max value of major is 2047
692  NANOVDB_ASSERT(minor < (1u << 11)); // max value of minor is 2047
693  NANOVDB_ASSERT(patch < (1u << 10)); // max value of patch is 1023
694  }
695  __hostdev__ bool operator==(const Version& rhs) const { return mData == rhs.mData; }
696  __hostdev__ bool operator<( const Version& rhs) const { return mData < rhs.mData; }
697  __hostdev__ bool operator<=(const Version& rhs) const { return mData <= rhs.mData; }
698  __hostdev__ bool operator>( const Version& rhs) const { return mData > rhs.mData; }
699  __hostdev__ bool operator>=(const Version& rhs) const { return mData >= rhs.mData; }
700  __hostdev__ uint32_t id() const { return mData; }
701  __hostdev__ uint32_t getMajor() const { return (mData >> 21) & ((1u << 11) - 1); }
702  __hostdev__ uint32_t getMinor() const { return (mData >> 10) & ((1u << 11) - 1); }
703  __hostdev__ uint32_t getPatch() const { return mData & ((1u << 10) - 1); }
704  __hostdev__ bool isCompatible() const { return this->getMajor() == uint32_t(NANOVDB_MAJOR_VERSION_NUMBER); }
705  /// @brief Returns the difference between major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER
706  /// @return return 0 if the major version equals NANOVDB_MAJOR_VERSION_NUMBER, else a negative age if this
707  /// instance has a smaller major verion (is older), and a positive age if it is newer, i.e. larger.
708  __hostdev__ int age() const {return int(this->getMajor()) - int(NANOVDB_MAJOR_VERSION_NUMBER);}
709 }; // Version
710 
711 /// @brief print the verion number to a c-string
712 /// @param dst destination string of size 8 or more
713 /// @param v version to be printed
714 /// @return returns destination string @c dst
715 __hostdev__ inline char* toStr(char *dst, const Version &v)
716 {
717  return util::sprint(dst, v.getMajor(), ".",v.getMinor(), ".",v.getPatch());
718 }
719 
720 // ----------------------------> TensorTraits <--------------------------------------
721 
722 template<typename T, int Rank = (util::is_specialization<T, math::Vec3>::value || util::is_specialization<T, math::Vec4>::value || util::is_same<T, math::Rgba8>::value) ? 1 : 0>
724 
725 template<typename T>
726 struct TensorTraits<T, 0>
727 {
728  static const int Rank = 0; // i.e. scalar
729  static const bool IsScalar = true;
730  static const bool IsVector = false;
731  static const int Size = 1;
732  using ElementType = T;
733  static T scalar(const T& s) { return s; }
734 };
735 
736 template<typename T>
737 struct TensorTraits<T, 1>
738 {
739  static const int Rank = 1; // i.e. vector
740  static const bool IsScalar = false;
741  static const bool IsVector = true;
742  static const int Size = T::SIZE;
743  using ElementType = typename T::ValueType;
744  static ElementType scalar(const T& v) { return v.length(); }
745 };
746 
747 // ----------------------------> FloatTraits <--------------------------------------
748 
749 template<typename T, int = sizeof(typename TensorTraits<T>::ElementType)>
751 {
752  using FloatType = float;
753 };
754 
755 template<typename T>
756 struct FloatTraits<T, 8>
757 {
758  using FloatType = double;
759 };
760 
761 template<>
762 struct FloatTraits<bool, 1>
763 {
764  using FloatType = bool;
765 };
766 
767 template<>
768 struct FloatTraits<ValueIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
769 {
770  using FloatType = uint64_t;
771 };
772 
773 template<>
774 struct FloatTraits<ValueIndexMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
775 {
776  using FloatType = uint64_t;
777 };
778 
779 template<>
780 struct FloatTraits<ValueOnIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
781 {
782  using FloatType = uint64_t;
783 };
784 
785 template<>
786 struct FloatTraits<ValueOnIndexMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
787 {
788  using FloatType = uint64_t;
789 };
790 
791 template<>
792 struct FloatTraits<ValueMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
793 {
794  using FloatType = bool;
795 };
796 
797 template<>
798 struct FloatTraits<Point, 1> // size of empty class in C++ is 1 byte and not 0 byte
799 {
800  using FloatType = double;
801 };
802 
803 // ----------------------------> mapping BuildType -> GridType <--------------------------------------
804 
805 /// @brief Maps from a templated build type to a GridType enum
806 template<typename BuildT>
808 {
809  if constexpr(util::is_same<BuildT, float>::value) { // resolved at compile-time
810  return GridType::Float;
811  } else if constexpr(util::is_same<BuildT, double>::value) {
812  return GridType::Double;
813  } else if constexpr(util::is_same<BuildT, int16_t>::value) {
814  return GridType::Int16;
815  } else if constexpr(util::is_same<BuildT, int32_t>::value) {
816  return GridType::Int32;
817  } else if constexpr(util::is_same<BuildT, int64_t>::value) {
818  return GridType::Int64;
819  } else if constexpr(util::is_same<BuildT, Vec3f>::value) {
820  return GridType::Vec3f;
821  } else if constexpr(util::is_same<BuildT, Vec3d>::value) {
822  return GridType::Vec3d;
823  } else if constexpr(util::is_same<BuildT, uint32_t>::value) {
824  return GridType::UInt32;
825  } else if constexpr(util::is_same<BuildT, ValueMask>::value) {
826  return GridType::Mask;
827  } else if constexpr(util::is_same<BuildT, Half>::value) {
828  return GridType::Half;
829  } else if constexpr(util::is_same<BuildT, ValueIndex>::value) {
830  return GridType::Index;
831  } else if constexpr(util::is_same<BuildT, ValueOnIndex>::value) {
832  return GridType::OnIndex;
833  } else if constexpr(util::is_same<BuildT, ValueIndexMask>::value) {
834  return GridType::IndexMask;
836  return GridType::OnIndexMask;
837  } else if constexpr(util::is_same<BuildT, bool>::value) {
838  return GridType::Boolean;
839  } else if constexpr(util::is_same<BuildT, math::Rgba8>::value) {
840  return GridType::RGBA8;
841  } else if constexpr(util::is_same<BuildT, Fp4>::value) {
842  return GridType::Fp4;
843  } else if constexpr(util::is_same<BuildT, Fp8>::value) {
844  return GridType::Fp8;
845  } else if constexpr(util::is_same<BuildT, Fp16>::value) {
846  return GridType::Fp16;
847  } else if constexpr(util::is_same<BuildT, FpN>::value) {
848  return GridType::FpN;
849  } else if constexpr(util::is_same<BuildT, Vec4f>::value) {
850  return GridType::Vec4f;
851  } else if constexpr(util::is_same<BuildT, Vec4d>::value) {
852  return GridType::Vec4d;
853  } else if constexpr(util::is_same<BuildT, Point>::value) {
854  return GridType::PointIndex;
855  } else if constexpr(util::is_same<BuildT, Vec3u8>::value) {
856  return GridType::Vec3u8;
857  } else if constexpr(util::is_same<BuildT, Vec3u16>::value) {
858  return GridType::Vec3u16;
859  } else if constexpr(util::is_same<BuildT, uint8_t>::value) {
860  return GridType::UInt8;
861  }
862  return GridType::Unknown;
863 }// toGridType
864 
865 template<typename BuildT>
866 [[deprecated("Use toGridType<T>() instead.")]]
867 __hostdev__ inline GridType mapToGridType(){return toGridType<BuildT>();}
868 
869 // ----------------------------> mapping BuildType -> GridClass <--------------------------------------
870 
871 /// @brief Maps from a templated build type to a GridClass enum
872 template<typename BuildT>
874 {
876  return GridClass::Topology;
877  } else if constexpr(BuildTraits<BuildT>::is_index) {
878  return GridClass::IndexGrid;
879  } else if constexpr(util::is_same<BuildT, math::Rgba8>::value) {
880  return GridClass::VoxelVolume;
881  } else if constexpr(util::is_same<BuildT, Point>::value) {
882  return GridClass::PointIndex;
883  }
884  return defaultClass;
885 }
886 
887 template<typename BuildT>
888 [[deprecated("Use toGridClass<T>() instead.")]]
890 {
891  return toGridClass<BuildT>();
892 }
893 
894 // ----------------------------> BitFlags <--------------------------------------
895 
896 template<int N>
897 struct BitArray;
898 template<>
899 struct BitArray<8>
900 {
901  uint8_t mFlags{0};
902 };
903 template<>
904 struct BitArray<16>
905 {
906  uint16_t mFlags{0};
907 };
908 template<>
909 struct BitArray<32>
910 {
911  uint32_t mFlags{0};
912 };
913 template<>
914 struct BitArray<64>
915 {
916  uint64_t mFlags{0};
917 };
918 
919 template<int N>
920 class BitFlags : public BitArray<N>
921 {
922 protected:
923  using BitArray<N>::mFlags;
924 
925 public:
926  using Type = decltype(mFlags);
927  BitFlags() {}
928  BitFlags(Type mask) : BitArray<N>{mask} {}
929  BitFlags(std::initializer_list<uint8_t> list)
930  {
931  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
932  }
933  template<typename MaskT>
934  BitFlags(std::initializer_list<MaskT> list)
935  {
936  for (auto mask : list) mFlags |= static_cast<Type>(mask);
937  }
938  __hostdev__ Type data() const { return mFlags; }
939  __hostdev__ Type& data() { return mFlags; }
940  __hostdev__ void initBit(std::initializer_list<uint8_t> list)
941  {
942  mFlags = 0u;
943  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
944  }
945  template<typename MaskT>
946  __hostdev__ void initMask(std::initializer_list<MaskT> list)
947  {
948  mFlags = 0u;
949  for (auto mask : list) mFlags |= static_cast<Type>(mask);
950  }
951  __hostdev__ Type getFlags() const { return mFlags & (static_cast<Type>(GridFlags::End) - 1u); } // mask out everything except relevant bits
952 
953  __hostdev__ void setOn() { mFlags = ~Type(0u); }
954  __hostdev__ void setOff() { mFlags = Type(0u); }
955 
956  __hostdev__ void setBitOn(uint8_t bit) { mFlags |= static_cast<Type>(1 << bit); }
957  __hostdev__ void setBitOff(uint8_t bit) { mFlags &= ~static_cast<Type>(1 << bit); }
958 
959  __hostdev__ void setBitOn(std::initializer_list<uint8_t> list)
960  {
961  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
962  }
963  __hostdev__ void setBitOff(std::initializer_list<uint8_t> list)
964  {
965  for (auto bit : list) mFlags &= ~static_cast<Type>(1 << bit);
966  }
967 
968  template<typename MaskT>
969  __hostdev__ void setMaskOn(MaskT mask) { mFlags |= static_cast<Type>(mask); }
970  template<typename MaskT>
971  __hostdev__ void setMaskOff(MaskT mask) { mFlags &= ~static_cast<Type>(mask); }
972 
973  template<typename MaskT>
974  __hostdev__ void setMaskOn(std::initializer_list<MaskT> list)
975  {
976  for (auto mask : list) mFlags |= static_cast<Type>(mask);
977  }
978  template<typename MaskT>
979  __hostdev__ void setMaskOff(std::initializer_list<MaskT> list)
980  {
981  for (auto mask : list) mFlags &= ~static_cast<Type>(mask);
982  }
983 
984  __hostdev__ void setBit(uint8_t bit, bool on) { on ? this->setBitOn(bit) : this->setBitOff(bit); }
985  template<typename MaskT>
986  __hostdev__ void setMask(MaskT mask, bool on) { on ? this->setMaskOn(mask) : this->setMaskOff(mask); }
987 
988  __hostdev__ bool isOn() const { return mFlags == ~Type(0u); }
989  __hostdev__ bool isOff() const { return mFlags == Type(0u); }
990  __hostdev__ bool isBitOn(uint8_t bit) const { return 0 != (mFlags & static_cast<Type>(1 << bit)); }
991  __hostdev__ bool isBitOff(uint8_t bit) const { return 0 == (mFlags & static_cast<Type>(1 << bit)); }
992  template<typename MaskT>
993  __hostdev__ bool isMaskOn(MaskT mask) const { return 0 != (mFlags & static_cast<Type>(mask)); }
994  template<typename MaskT>
995  __hostdev__ bool isMaskOff(MaskT mask) const { return 0 == (mFlags & static_cast<Type>(mask)); }
996  /// @brief return true if any of the masks in the list are on
997  template<typename MaskT>
998  __hostdev__ bool isMaskOn(std::initializer_list<MaskT> list) const
999  {
1000  for (auto mask : list) {
1001  if (0 != (mFlags & static_cast<Type>(mask))) return true;
1002  }
1003  return false;
1004  }
1005  /// @brief return true if any of the masks in the list are off
1006  template<typename MaskT>
1007  __hostdev__ bool isMaskOff(std::initializer_list<MaskT> list) const
1008  {
1009  for (auto mask : list) {
1010  if (0 == (mFlags & static_cast<Type>(mask))) return true;
1011  }
1012  return false;
1013  }
1014  /// @brief required for backwards compatibility
1015  __hostdev__ BitFlags& operator=(Type n)
1016  {
1017  mFlags = n;
1018  return *this;
1019  }
1020 }; // BitFlags<N>
1021 
1022 // ----------------------------> Mask <--------------------------------------
1023 
1024 /// @brief Bit-mask to encode active states and facilitate sequential iterators
1025 /// and a fast codec for I/O compression.
1026 template<uint32_t LOG2DIM>
1027 class Mask
1028 {
1029 public:
1030  static constexpr uint32_t SIZE = 1U << (3 * LOG2DIM); // Number of bits in mask
1031  static constexpr uint32_t WORD_COUNT = SIZE >> 6; // Number of 64 bit words
1032 
1033  /// @brief Return the memory footprint in bytes of this Mask
1034  __hostdev__ static size_t memUsage() { return sizeof(Mask); }
1035 
1036  /// @brief Return the number of bits available in this Mask
1037  __hostdev__ static uint32_t bitCount() { return SIZE; }
1038 
1039  /// @brief Return the number of machine words used by this Mask
1040  __hostdev__ static uint32_t wordCount() { return WORD_COUNT; }
1041 
1042  /// @brief Return the total number of set bits in this Mask
1043  __hostdev__ uint32_t countOn() const
1044  {
1045  uint32_t sum = 0;
1046  for (const uint64_t *w = mWords, *q = w + WORD_COUNT; w != q; ++w)
1047  sum += util::countOn(*w);
1048  return sum;
1049  }
1050 
1051  /// @brief Return the number of lower set bits in mask up to but excluding the i'th bit
1052  inline __hostdev__ uint32_t countOn(uint32_t i) const
1053  {
1054  uint32_t n = i >> 6, sum = util::countOn(mWords[n] & ((uint64_t(1) << (i & 63u)) - 1u));
1055  for (const uint64_t* w = mWords; n--; ++w)
1056  sum += util::countOn(*w);
1057  return sum;
1058  }
1059 
1060  template<bool On>
1061  class Iterator
1062  {
1063  public:
1065  : mPos(Mask::SIZE)
1066  , mParent(nullptr)
1067  {
1068  }
1069  __hostdev__ Iterator(uint32_t pos, const Mask* parent)
1070  : mPos(pos)
1071  , mParent(parent)
1072  {
1073  }
1074  Iterator& operator=(const Iterator&) = default;
1075  __hostdev__ uint32_t operator*() const { return mPos; }
1076  __hostdev__ uint32_t pos() const { return mPos; }
1077  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
1079  {
1080  mPos = mParent->findNext<On>(mPos + 1);
1081  return *this;
1082  }
1084  {
1085  auto tmp = *this;
1086  ++(*this);
1087  return tmp;
1088  }
1089 
1090  private:
1091  uint32_t mPos;
1092  const Mask* mParent;
1093  }; // Member class Iterator
1094 
1096  {
1097  public:
1099  : mPos(pos)
1100  {
1101  }
1102  DenseIterator& operator=(const DenseIterator&) = default;
1103  __hostdev__ uint32_t operator*() const { return mPos; }
1104  __hostdev__ uint32_t pos() const { return mPos; }
1105  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
1107  {
1108  ++mPos;
1109  return *this;
1110  }
1112  {
1113  auto tmp = *this;
1114  ++mPos;
1115  return tmp;
1116  }
1117 
1118  private:
1119  uint32_t mPos;
1120  }; // Member class DenseIterator
1121 
1124 
1125  __hostdev__ OnIterator beginOn() const { return OnIterator(this->findFirst<true>(), this); }
1126 
1127  __hostdev__ OffIterator beginOff() const { return OffIterator(this->findFirst<false>(), this); }
1128 
1130 
1131  /// @brief Initialize all bits to zero.
1133  {
1134  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1135  mWords[i] = 0;
1136  }
1138  {
1139  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
1140  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1141  mWords[i] = v;
1142  }
1143 
1144  /// @brief Copy constructor
1145  __hostdev__ Mask(const Mask& other)
1146  {
1147  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1148  mWords[i] = other.mWords[i];
1149  }
1150 
1151  /// @brief Return a pointer to the list of words of the bit mask
1152  __hostdev__ uint64_t* words() { return mWords; }
1153  __hostdev__ const uint64_t* words() const { return mWords; }
1154 
1155  template<typename WordT>
1156  __hostdev__ WordT getWord(uint32_t n) const
1157  {
1159  NANOVDB_ASSERT(n*8*sizeof(WordT) < WORD_COUNT);
1160  return reinterpret_cast<WordT*>(mWords)[n];
1161  }
1162  template<typename WordT>
1163  __hostdev__ void setWord(WordT w, uint32_t n)
1164  {
1166  NANOVDB_ASSERT(n*8*sizeof(WordT) < WORD_COUNT);
1167  reinterpret_cast<WordT*>(mWords)[n] = w;
1168  }
1169 
1170  /// @brief Assignment operator that works with openvdb::util::NodeMask
1171  template<typename MaskT = Mask>
1173  {
1174  static_assert(sizeof(Mask) == sizeof(MaskT), "Mismatching sizeof");
1175  static_assert(WORD_COUNT == MaskT::WORD_COUNT, "Mismatching word count");
1176  static_assert(LOG2DIM == MaskT::LOG2DIM, "Mismatching LOG2DIM");
1177  auto* src = reinterpret_cast<const uint64_t*>(&other);
1178  for (uint64_t *dst = mWords, *end = dst + WORD_COUNT; dst != end; ++dst)
1179  *dst = *src++;
1180  return *this;
1181  }
1182 
1183  //__hostdev__ Mask& operator=(const Mask& other){return *util::memcpy(this, &other);}
1184  Mask& operator=(const Mask&) = default;
1185 
1186  __hostdev__ bool operator==(const Mask& other) const
1187  {
1188  for (uint32_t i = 0; i < WORD_COUNT; ++i) {
1189  if (mWords[i] != other.mWords[i])
1190  return false;
1191  }
1192  return true;
1193  }
1194 
1195  __hostdev__ bool operator!=(const Mask& other) const { return !((*this) == other); }
1196 
1197  /// @brief Return true if the given bit is set.
1198  __hostdev__ bool isOn(uint32_t n) const { return 0 != (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
1199 
1200  /// @brief Return true if the given bit is NOT set.
1201  __hostdev__ bool isOff(uint32_t n) const { return 0 == (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
1202 
1203  /// @brief Return true if all the bits are set in this Mask.
1204  __hostdev__ bool isOn() const
1205  {
1206  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1207  if (mWords[i] != ~uint64_t(0))
1208  return false;
1209  return true;
1210  }
1211 
1212  /// @brief Return true if none of the bits are set in this Mask.
1213  __hostdev__ bool isOff() const
1214  {
1215  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1216  if (mWords[i] != uint64_t(0))
1217  return false;
1218  return true;
1219  }
1220 
1221  /// @brief Set the specified bit on.
1222  __hostdev__ void setOn(uint32_t n) { mWords[n >> 6] |= uint64_t(1) << (n & 63); }
1223  /// @brief Set the specified bit off.
1224  __hostdev__ void setOff(uint32_t n) { mWords[n >> 6] &= ~(uint64_t(1) << (n & 63)); }
1225 
1226 #if defined(__CUDACC__) // the following functions only run on the GPU!
1227  __device__ inline void setOnAtomic(uint32_t n)
1228  {
1229  atomicOr(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), 1ull << (n & 63));
1230  }
1231  __device__ inline void setOffAtomic(uint32_t n)
1232  {
1233  atomicAnd(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), ~(1ull << (n & 63)));
1234  }
1235  __device__ inline void setAtomic(uint32_t n, bool on)
1236  {
1237  on ? this->setOnAtomic(n) : this->setOffAtomic(n);
1238  }
1239 /*
1240  template<typename WordT>
1241  __device__ inline void setWordAtomic(WordT w, uint32_t n)
1242  {
1243  static_assert(util::is_same<WordT, uint8_t, uint16_t, uint32_t, uint64_t>::value);
1244  NANOVDB_ASSERT(n*8*sizeof(WordT) < WORD_COUNT);
1245  if constexpr(util::is_same<WordT,uint8_t>::value) {
1246  mask <<= x;
1247  } else if constexpr(util::is_same<WordT,uint16_t>::value) {
1248  unsigned int mask = w;
1249  if (n >> 1) mask <<= 16;
1250  atomicOr(reinterpret_cast<unsigned int*>(this) + n, mask);
1251  } else if constexpr(util::is_same<WordT,uint32_t>::value) {
1252  atomicOr(reinterpret_cast<unsigned int*>(this) + n, w);
1253  } else {
1254  atomicOr(reinterpret_cast<unsigned long long int*>(this) + n, w);
1255  }
1256  }
1257 */
1258 #endif
1259  /// @brief Set the specified bit on or off.
1260  __hostdev__ void set(uint32_t n, bool on)
1261  {
1262 #if 1 // switch between branchless
1263  auto& word = mWords[n >> 6];
1264  n &= 63;
1265  word &= ~(uint64_t(1) << n);
1266  word |= uint64_t(on) << n;
1267 #else
1268  on ? this->setOn(n) : this->setOff(n);
1269 #endif
1270  }
1271 
1272  /// @brief Set all bits on
1274  {
1275  for (uint32_t i = 0; i < WORD_COUNT; ++i)mWords[i] = ~uint64_t(0);
1276  }
1277 
1278  /// @brief Set all bits off
1280  {
1281  for (uint32_t i = 0; i < WORD_COUNT; ++i) mWords[i] = uint64_t(0);
1282  }
1283 
1284  /// @brief Set all bits off
1285  __hostdev__ void set(bool on)
1286  {
1287  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
1288  for (uint32_t i = 0; i < WORD_COUNT; ++i) mWords[i] = v;
1289  }
1290  /// brief Toggle the state of all bits in the mask
1292  {
1293  uint32_t n = WORD_COUNT;
1294  for (auto* w = mWords; n--; ++w) *w = ~*w;
1295  }
1296  __hostdev__ void toggle(uint32_t n) { mWords[n >> 6] ^= uint64_t(1) << (n & 63); }
1297 
1298  /// @brief Bitwise intersection
1300  {
1301  uint64_t* w1 = mWords;
1302  const uint64_t* w2 = other.mWords;
1303  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 &= *w2;
1304  return *this;
1305  }
1306  /// @brief Bitwise union
1308  {
1309  uint64_t* w1 = mWords;
1310  const uint64_t* w2 = other.mWords;
1311  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 |= *w2;
1312  return *this;
1313  }
1314  /// @brief Bitwise difference
1316  {
1317  uint64_t* w1 = mWords;
1318  const uint64_t* w2 = other.mWords;
1319  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 &= ~*w2;
1320  return *this;
1321  }
1322  /// @brief Bitwise XOR
1324  {
1325  uint64_t* w1 = mWords;
1326  const uint64_t* w2 = other.mWords;
1327  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 ^= *w2;
1328  return *this;
1329  }
1330 
1332  template<bool ON>
1333  __hostdev__ uint32_t findFirst() const
1334  {
1335  uint32_t n = 0u;
1336  const uint64_t* w = mWords;
1337  for (; n < WORD_COUNT && !(ON ? *w : ~*w); ++w, ++n);
1338  return n < WORD_COUNT ? (n << 6) + util::findLowestOn(ON ? *w : ~*w) : SIZE;
1339  }
1340 
1342  template<bool ON>
1343  __hostdev__ uint32_t findNext(uint32_t start) const
1344  {
1345  uint32_t n = start >> 6; // initiate
1346  if (n >= WORD_COUNT) return SIZE; // check for out of bounds
1347  uint32_t m = start & 63u;
1348  uint64_t b = ON ? mWords[n] : ~mWords[n];
1349  if (b & (uint64_t(1u) << m)) return start; // simple case: start is on/off
1350  b &= ~uint64_t(0u) << m; // mask out lower bits
1351  while (!b && ++n < WORD_COUNT) b = ON ? mWords[n] : ~mWords[n]; // find next non-zero word
1352  return b ? (n << 6) + util::findLowestOn(b) : SIZE; // catch last word=0
1353  }
1354 
1356  template<bool ON>
1357  __hostdev__ uint32_t findPrev(uint32_t start) const
1358  {
1359  uint32_t n = start >> 6; // initiate
1360  if (n >= WORD_COUNT) return SIZE; // check for out of bounds
1361  uint32_t m = start & 63u;
1362  uint64_t b = ON ? mWords[n] : ~mWords[n];
1363  if (b & (uint64_t(1u) << m)) return start; // simple case: start is on/off
1364  b &= (uint64_t(1u) << m) - 1u; // mask out higher bits
1365  while (!b && n) b = ON ? mWords[--n] : ~mWords[--n]; // find previous non-zero word
1366  return b ? (n << 6) + util::findHighestOn(b) : SIZE; // catch first word=0
1367  }
1368 
1369 private:
1370  uint64_t mWords[WORD_COUNT];
1371 }; // Mask class
1372 
1373 // ----------------------------> Map <--------------------------------------
1374 
1375 /// @brief Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation
1376 struct Map
1377 { // 264B (not 32B aligned!)
1378  float mMatF[9]; // 9*4B <- 3x3 matrix
1379  float mInvMatF[9]; // 9*4B <- 3x3 matrix
1380  float mVecF[3]; // 3*4B <- translation
1381  float mTaperF; // 4B, placeholder for taper value
1382  double mMatD[9]; // 9*8B <- 3x3 matrix
1383  double mInvMatD[9]; // 9*8B <- 3x3 matrix
1384  double mVecD[3]; // 3*8B <- translation
1385  double mTaperD; // 8B, placeholder for taper value
1386 
1387  /// @brief Default constructor for the identity map
1389  : mMatF{ 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
1390  , mInvMatF{1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
1391  , mVecF{0.0f, 0.0f, 0.0f}
1392  , mTaperF{1.0f}
1393  , mMatD{ 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
1394  , mInvMatD{1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
1395  , mVecD{0.0, 0.0, 0.0}
1396  , mTaperD{1.0}
1397  {
1398  }
1399  __hostdev__ Map(double s, const Vec3d& t = Vec3d(0.0, 0.0, 0.0))
1400  : mMatF{float(s), 0.0f, 0.0f, 0.0f, float(s), 0.0f, 0.0f, 0.0f, float(s)}
1401  , mInvMatF{1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s)}
1402  , mVecF{float(t[0]), float(t[1]), float(t[2])}
1403  , mTaperF{1.0f}
1404  , mMatD{s, 0.0, 0.0, 0.0, s, 0.0, 0.0, 0.0, s}
1405  , mInvMatD{1.0 / s, 0.0, 0.0, 0.0, 1.0 / s, 0.0, 0.0, 0.0, 1.0 / s}
1406  , mVecD{t[0], t[1], t[2]}
1407  , mTaperD{1.0}
1408  {
1409  }
1410 
1411  /// @brief Initialize the member data from 3x3 or 4x4 matrices
1412  /// @note This is not _hostdev__ since then MatT=openvdb::Mat4d will produce warnings
1413  template<typename MatT, typename Vec3T>
1414  void set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper = 1.0);
1415 
1416  /// @brief Initialize the member data from 4x4 matrices
1417  /// @note The last (4th) row of invMat is actually ignored.
1418  /// This is not _hostdev__ since then Mat4T=openvdb::Mat4d will produce warnings
1419  template<typename Mat4T>
1420  void set(const Mat4T& mat, const Mat4T& invMat, double taper = 1.0) { this->set(mat, invMat, mat[3], taper); }
1421 
1422  template<typename Vec3T>
1423  void set(double scale, const Vec3T& translation, double taper = 1.0);
1424 
1425  /// @brief Apply the forward affine transformation to a vector using 64bit floating point arithmetics.
1426  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
1427  /// @tparam Vec3T Template type of the 3D vector to be mapped
1428  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1429  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
1430  template<typename Vec3T>
1431  __hostdev__ Vec3T applyMap(const Vec3T& ijk) const { return math::matMult(mMatD, mVecD, ijk); }
1432 
1433  /// @brief Apply the forward affine transformation to a vector using 32bit floating point arithmetics.
1434  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
1435  /// @tparam Vec3T Template type of the 3D vector to be mapped
1436  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1437  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
1438  template<typename Vec3T>
1439  __hostdev__ Vec3T applyMapF(const Vec3T& ijk) const { return math::matMult(mMatF, mVecF, ijk); }
1440 
1441  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1442  /// e.g. scale and rotation WITHOUT translation.
1443  /// @note Typically this operation is used for scale and rotation from index -> world mapping
1444  /// @tparam Vec3T Template type of the 3D vector to be mapped
1445  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1446  /// @return linear forward 3x3 mapping of the input vector
1447  template<typename Vec3T>
1448  __hostdev__ Vec3T applyJacobian(const Vec3T& ijk) const { return math::matMult(mMatD, ijk); }
1449 
1450  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
1451  /// e.g. scale and rotation WITHOUT translation.
1452  /// @note Typically this operation is used for scale and rotation from index -> world mapping
1453  /// @tparam Vec3T Template type of the 3D vector to be mapped
1454  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1455  /// @return linear forward 3x3 mapping of the input vector
1456  template<typename Vec3T>
1457  __hostdev__ Vec3T applyJacobianF(const Vec3T& ijk) const { return math::matMult(mMatF, ijk); }
1458 
1459  /// @brief Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
1460  /// @note Typically this operation is used for the world -> index mapping
1461  /// @tparam Vec3T Template type of the 3D vector to be mapped
1462  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1463  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
1464  template<typename Vec3T>
1465  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const
1466  {
1467  return math::matMult(mInvMatD, Vec3T(xyz[0] - mVecD[0], xyz[1] - mVecD[1], xyz[2] - mVecD[2]));
1468  }
1469 
1470  /// @brief Apply the inverse affine mapping to a vector using 32bit floating point arithmetics.
1471  /// @note Typically this operation is used for the world -> index mapping
1472  /// @tparam Vec3T Template type of the 3D vector to be mapped
1473  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1474  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
1475  template<typename Vec3T>
1476  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const
1477  {
1478  return math::matMult(mInvMatF, Vec3T(xyz[0] - mVecF[0], xyz[1] - mVecF[1], xyz[2] - mVecF[2]));
1479  }
1480 
1481  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1482  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1483  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1484  /// @tparam Vec3T Template type of the 3D vector to be mapped
1485  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1486  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1487  template<typename Vec3T>
1488  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return math::matMult(mInvMatD, xyz); }
1489 
1490  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
1491  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1492  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1493  /// @tparam Vec3T Template type of the 3D vector to be mapped
1494  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1495  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1496  template<typename Vec3T>
1497  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return math::matMult(mInvMatF, xyz); }
1498 
1499  /// @brief Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1500  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1501  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1502  /// @tparam Vec3T Template type of the 3D vector to be mapped
1503  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1504  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1505  template<typename Vec3T>
1506  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return math::matMultT(mInvMatD, xyz); }
1507  template<typename Vec3T>
1508  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return math::matMultT(mInvMatF, xyz); }
1509 
1510  /// @brief Return a voxels size in each coordinate direction, measured at the origin
1511  __hostdev__ Vec3d getVoxelSize() const { return this->applyMap(Vec3d(1)) - this->applyMap(Vec3d(0)); }
1512 }; // Map
1513 
1514 template<typename MatT, typename Vec3T>
1515 inline void Map::set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper)
1516 {
1517  float * mf = mMatF, *vf = mVecF, *mif = mInvMatF;
1518  double *md = mMatD, *vd = mVecD, *mid = mInvMatD;
1519  mTaperF = static_cast<float>(taper);
1520  mTaperD = taper;
1521  for (int i = 0; i < 3; ++i) {
1522  *vd++ = translate[i]; //translation
1523  *vf++ = static_cast<float>(translate[i]); //translation
1524  for (int j = 0; j < 3; ++j) {
1525  *md++ = mat[j][i]; //transposed
1526  *mid++ = invMat[j][i];
1527  *mf++ = static_cast<float>(mat[j][i]); //transposed
1528  *mif++ = static_cast<float>(invMat[j][i]);
1529  }
1530  }
1531 }
1532 
1533 template<typename Vec3T>
1534 inline void Map::set(double dx, const Vec3T& trans, double taper)
1535 {
1536  NANOVDB_ASSERT(dx > 0.0);
1537  const double mat[3][3] = { {dx, 0.0, 0.0}, // row 0
1538  {0.0, dx, 0.0}, // row 1
1539  {0.0, 0.0, dx} }; // row 2
1540  const double idx = 1.0 / dx;
1541  const double invMat[3][3] = { {idx, 0.0, 0.0}, // row 0
1542  {0.0, idx, 0.0}, // row 1
1543  {0.0, 0.0, idx} }; // row 2
1544  this->set(mat, invMat, trans, taper);
1545 }
1546 
1547 // ----------------------------> GridBlindMetaData <--------------------------------------
1548 
1549 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridBlindMetaData
1550 { // 288 bytes
1551  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less!
1552  int64_t mDataOffset; // byte offset to the blind data, relative to GridBlindMetaData::this.
1553  uint64_t mValueCount; // number of blind values, e.g. point count
1554  uint32_t mValueSize;// byte size of each value, e.g. 4 if mDataType=Float and 1 if mDataType=Unknown since that amounts to char
1555  GridBlindDataSemantic mSemantic; // semantic meaning of the data.
1557  GridType mDataType; // 4 bytes
1558  char mName[MaxNameSize]; // note this includes the NULL termination
1559  // no padding required for 32 byte alignment
1560 
1561  /// @brief Empty constructor
1563  : mDataOffset(0)
1564  , mValueCount(0)
1565  , mValueSize(0)
1566  , mSemantic(GridBlindDataSemantic::Unknown)
1567  , mDataClass(GridBlindDataClass::Unknown)
1568  , mDataType(GridType::Unknown)
1569  {
1570  util::memzero(mName, MaxNameSize);
1571  }
1572 
1573  GridBlindMetaData(int64_t dataOffset, uint64_t valueCount, uint32_t valueSize, GridBlindDataSemantic semantic, GridBlindDataClass dataClass, GridType dataType)
1574  : mDataOffset(dataOffset)
1575  , mValueCount(valueCount)
1576  , mValueSize(valueSize)
1577  , mSemantic(semantic)
1578  , mDataClass(dataClass)
1579  , mDataType(dataType)
1580  {
1581  util::memzero(mName, MaxNameSize);
1582  }
1583 
1584  /// @brief Copy constructor that resets mDataOffset and zeros out mName
1586  : mDataOffset(util::PtrDiff(util::PtrAdd(&other, other.mDataOffset), this))
1587  , mValueCount(other.mValueCount)
1588  , mValueSize(other.mValueSize)
1589  , mSemantic(other.mSemantic)
1590  , mDataClass(other.mDataClass)
1591  , mDataType(other.mDataType)
1592  {
1593  util::strncpy(mName, other.mName, MaxNameSize);
1594  }
1595 
1596  /// @brief Copy assignment operator that resets mDataOffset and copies mName
1597  /// @param rhs right-hand instance to copy
1598  /// @return reference to itself
1600  {
1601  mDataOffset = util::PtrDiff(util::PtrAdd(&rhs, rhs.mDataOffset), this);
1602  mValueCount = rhs.mValueCount;
1603  mValueSize = rhs. mValueSize;
1604  mSemantic = rhs.mSemantic;
1605  mDataClass = rhs.mDataClass;
1606  mDataType = rhs.mDataType;
1607  util::strncpy(mName, rhs.mName, MaxNameSize);
1608  return *this;
1609  }
1610 
1611  __hostdev__ void setBlindData(const void* blindData)
1612  {
1613  mDataOffset = util::PtrDiff(blindData, this);
1614  }
1615 
1616  /// @brief Sets the name string
1617  /// @param name c-string source name
1618  /// @return returns false if @c name has too many characters
1619  __hostdev__ bool setName(const char* name){return util::strncpy(mName, name, MaxNameSize)[MaxNameSize-1] == '\0';}
1620 
1621  /// @brief returns a const void point to the blind data
1622  /// @note assumes that setBlinddData was called
1623  __hostdev__ const void* blindData() const
1624  {
1625  NANOVDB_ASSERT(mDataOffset != 0);
1626  return util::PtrAdd(this, mDataOffset);
1627  }
1628 
1629  /// @brief Get a const pointer to the blind data represented by this meta data
1630  /// @tparam BlindDataT Expected value type of the blind data.
1631  /// @return Returns NULL if mGridType!=toGridType<BlindDataT>(), else a const point of type BlindDataT.
1632  /// @note Use mDataType=Unknown if BlindDataT is a custom data type unknown to NanoVDB.
1633  template<typename BlindDataT>
1634  __hostdev__ const BlindDataT* getBlindData() const
1635  {
1636  return mDataOffset && (mDataType == toGridType<BlindDataT>()) ? util::PtrAdd<BlindDataT>(this, mDataOffset) : nullptr;
1637  }
1638 
1639  /// @brief return true if this meta data has a valid combination of semantic, class and value tags
1640  /// @note this does not check if the mDataOffset has been set!
1641  __hostdev__ bool isValid() const
1642  {
1643  auto check = [&]()->bool{
1644  switch (mDataType){
1645  case GridType::Unknown: return mValueSize==1u;// i.e. we encode data as mValueCount chars
1646  case GridType::Float: return mValueSize==4u;
1647  case GridType::Double: return mValueSize==8u;
1648  case GridType::Int16: return mValueSize==2u;
1649  case GridType::Int32: return mValueSize==4u;
1650  case GridType::Int64: return mValueSize==8u;
1651  case GridType::Vec3f: return mValueSize==12u;
1652  case GridType::Vec3d: return mValueSize==24u;
1653  case GridType::Half: return mValueSize==2u;
1654  case GridType::RGBA8: return mValueSize==4u;
1655  case GridType::Fp8: return mValueSize==1u;
1656  case GridType::Fp16: return mValueSize==2u;
1657  case GridType::Vec4f: return mValueSize==16u;
1658  case GridType::Vec4d: return mValueSize==32u;
1659  case GridType::Vec3u8: return mValueSize==3u;
1660  case GridType::Vec3u16: return mValueSize==6u;
1661  default: return true;}// all other combinations are valid
1662  };
1663  return nanovdb::isValid(mDataClass, mSemantic, mDataType) && check();
1664  }
1665 
1666  /// @brief return size in bytes of the blind data represented by this blind meta data
1667  /// @note This size includes possible padding for 32 byte alignment. The actual amount
1668  /// of bind data is mValueCount * mValueSize
1669  __hostdev__ uint64_t blindDataSize() const
1670  {
1671  return math::AlignUp<NANOVDB_DATA_ALIGNMENT>(mValueCount * mValueSize);
1672  }
1673 }; // GridBlindMetaData
1674 
1675 // ----------------------------> NodeTrait <--------------------------------------
1676 
1677 /// @brief Struct to derive node type from its level in a given
1678 /// grid, tree or root while preserving constness
1679 template<typename GridOrTreeOrRootT, int LEVEL>
1680 struct NodeTrait;
1681 
1682 // Partial template specialization of above Node struct
1683 template<typename GridOrTreeOrRootT>
1684 struct NodeTrait<GridOrTreeOrRootT, 0>
1685 {
1686  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1687  using Type = typename GridOrTreeOrRootT::LeafNodeType;
1688  using type = typename GridOrTreeOrRootT::LeafNodeType;
1689 };
1690 template<typename GridOrTreeOrRootT>
1691 struct NodeTrait<const GridOrTreeOrRootT, 0>
1692 {
1693  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1694  using Type = const typename GridOrTreeOrRootT::LeafNodeType;
1695  using type = const typename GridOrTreeOrRootT::LeafNodeType;
1696 };
1697 
1698 template<typename GridOrTreeOrRootT>
1699 struct NodeTrait<GridOrTreeOrRootT, 1>
1700 {
1701  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1702  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1703  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1704 };
1705 template<typename GridOrTreeOrRootT>
1706 struct NodeTrait<const GridOrTreeOrRootT, 1>
1707 {
1708  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1709  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1710  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1711 };
1712 template<typename GridOrTreeOrRootT>
1713 struct NodeTrait<GridOrTreeOrRootT, 2>
1714 {
1715  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1716  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1717  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1718 };
1719 template<typename GridOrTreeOrRootT>
1720 struct NodeTrait<const GridOrTreeOrRootT, 2>
1721 {
1722  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1723  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1724  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1725 };
1726 template<typename GridOrTreeOrRootT>
1727 struct NodeTrait<GridOrTreeOrRootT, 3>
1728 {
1729  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1730  using Type = typename GridOrTreeOrRootT::RootNodeType;
1731  using type = typename GridOrTreeOrRootT::RootNodeType;
1732 };
1733 
1734 template<typename GridOrTreeOrRootT>
1735 struct NodeTrait<const GridOrTreeOrRootT, 3>
1736 {
1737  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1738  using Type = const typename GridOrTreeOrRootT::RootNodeType;
1739  using type = const typename GridOrTreeOrRootT::RootNodeType;
1740 };
1741 
1742 // ------------> Froward decelerations of accelerated random access methods <---------------
1743 
1744 template<typename BuildT>
1745 struct GetValue;
1746 template<typename BuildT>
1747 struct SetValue;
1748 template<typename BuildT>
1749 struct SetVoxel;
1750 template<typename BuildT>
1751 struct GetState;
1752 template<typename BuildT>
1753 struct GetDim;
1754 template<typename BuildT>
1755 struct GetLeaf;
1756 template<typename BuildT>
1757 struct ProbeValue;
1758 template<typename BuildT>
1760 
1761 // ----------------------------> CheckMode <----------------------------------
1762 
1763 /// @brief List of different modes for computing for a checksum
1764 enum class CheckMode : uint32_t { Disable = 0, // no computation
1765  Empty = 0,
1766  Half = 1,
1767  Partial = 1, // fast but approximate
1768  Default = 1, // defaults to Partial
1769  Full = 2, // slow but accurate
1770  End = 3, // marks the end of the enum list
1771  StrLen = 9 + End};
1772 
1773 /// @brief Prints CheckMode enum to a c-string
1774 /// @param dst Destination c-string
1775 /// @param mode CheckMode enum to be converted to string
1776 /// @return destinations string @c dst
1777 __hostdev__ inline char* toStr(char *dst, CheckMode mode)
1778 {
1779  switch (mode){
1780  case CheckMode::Half: return util::strcpy(dst, "half");
1781  case CheckMode::Full: return util::strcpy(dst, "full");
1782  default: return util::strcpy(dst, "disabled");// StrLen = 8 + 1 + End
1783  }
1784 }
1785 
1786 // ----------------------------> Checksum <----------------------------------
1787 
1788 /// @brief Class that encapsulates two CRC32 checksums, one for the Grid, Tree and Root node meta data
1789 /// and one for the remaining grid nodes.
1791 {
1792  /// Three types of checksums:
1793  /// 1) Empty: all 64 bits are on (used to signify a disabled or undefined checksum)
1794  /// 2) Half: Upper 32 bits are on and not all of lower 32 bits are on (lower 32 bits checksum head of grid)
1795  /// 3) Full: Not all of the 64 bits are one (lower 32 bits checksum head of grid and upper 32 bits checksum tail of grid)
1796  union { uint32_t mCRC32[2]; uint64_t mCRC64; };// mCRC32[0] is checksum of Grid, Tree and Root, and mCRC32[1] is checksum of nodes
1797 
1798 public:
1799 
1800  static constexpr uint32_t EMPTY32 = ~uint32_t{0};
1801  static constexpr uint64_t EMPTY64 = ~uint64_t(0);
1802 
1803  /// @brief default constructor initiates checksum to EMPTY
1804  __hostdev__ Checksum() : mCRC64{EMPTY64} {}
1805 
1806  /// @brief Constructor that allows the two 32bit checksums to be initiated explicitly
1807  /// @param head Initial 32bit CRC checksum of grid, tree and root data
1808  /// @param tail Initial 32bit CRC checksum of all the nodes and blind data
1809  __hostdev__ Checksum(uint32_t head, uint32_t tail) : mCRC32{head, tail} {}
1810 
1811  /// @brief
1812  /// @param checksum
1813  /// @param mode
1814  __hostdev__ Checksum(uint64_t checksum, CheckMode mode = CheckMode::Full) : mCRC64{mode == CheckMode::Disable ? EMPTY64 : checksum}
1815  {
1816  if (mode == CheckMode::Partial) mCRC32[1] = EMPTY32;
1817  }
1818 
1819  /// @brief return the 64 bit checksum of this instance
1820  [[deprecated("Use Checksum::data instead.")]]
1821  __hostdev__ uint64_t checksum() const { return mCRC64; }
1822  [[deprecated("Use Checksum::head and Ckecksum::tail instead.")]]
1823  __hostdev__ uint32_t& checksum(int i) {NANOVDB_ASSERT(i==0 || i==1); return mCRC32[i]; }
1824  [[deprecated("Use Checksum::head and Ckecksum::tail instead.")]]
1825  __hostdev__ uint32_t checksum(int i) const {NANOVDB_ASSERT(i==0 || i==1); return mCRC32[i]; }
1826 
1827  __hostdev__ uint64_t full() const { return mCRC64; }
1828  __hostdev__ uint64_t& full() { return mCRC64; }
1829  __hostdev__ uint32_t head() const { return mCRC32[0]; }
1830  __hostdev__ uint32_t& head() { return mCRC32[0]; }
1831  __hostdev__ uint32_t tail() const { return mCRC32[1]; }
1832  __hostdev__ uint32_t& tail() { return mCRC32[1]; }
1833 
1834  /// @brief return true if the 64 bit checksum is partial, i.e. of head only
1835  [[deprecated("Use Checksum::isHalf instead.")]]
1836  __hostdev__ bool isPartial() const { return mCRC32[0] != EMPTY32 && mCRC32[1] == EMPTY32; }
1837  __hostdev__ bool isHalf() const { return mCRC32[0] != EMPTY32 && mCRC32[1] == EMPTY32; }
1838 
1839  /// @brief return true if the 64 bit checksum is fill, i.e. of both had and nodes
1840  __hostdev__ bool isFull() const { return mCRC64 != EMPTY64 && mCRC32[1] != EMPTY32; }
1841 
1842  /// @brief return true if the 64 bit checksum is disables (unset)
1843  __hostdev__ bool isEmpty() const { return mCRC64 == EMPTY64; }
1844 
1845  __hostdev__ void disable() { mCRC64 = EMPTY64; }
1846 
1847  /// @brief return the mode of the 64 bit checksum
1849  {
1850  return mCRC64 == EMPTY64 ? CheckMode::Disable :
1851  mCRC32[1] == EMPTY32 ? CheckMode::Partial : CheckMode::Full;
1852  }
1853 
1854  /// @brief return true if the checksums are identical
1855  /// @param rhs other Checksum
1856  __hostdev__ bool operator==(const Checksum &rhs) const {return mCRC64 == rhs.mCRC64;}
1857 
1858  /// @brief return true if the checksums are not identical
1859  /// @param rhs other Checksum
1860  __hostdev__ bool operator!=(const Checksum &rhs) const {return mCRC64 != rhs.mCRC64;}
1861 };// Checksum
1862 
1863 /// @brief Maps 64 bit checksum to CheckMode enum
1864 /// @param checksum 64 bit checksum with two CRC32 codes
1865 /// @return CheckMode enum
1866 __hostdev__ inline CheckMode toCheckMode(const Checksum &checksum){return checksum.mode();}
1867 
1868 // ----------------------------> Grid <--------------------------------------
1869 
1870 /*
1871  The following class and comment is for internal use only
1872 
1873  Memory layout:
1874 
1875  Grid -> 39 x double (world bbox and affine transformation)
1876  Tree -> Root 3 x ValueType + int32_t + N x Tiles (background,min,max,tileCount + tileCount x Tiles)
1877 
1878  N2 upper InternalNodes each with 2 bit masks, N2 tiles, and min/max values
1879 
1880  N1 lower InternalNodes each with 2 bit masks, N1 tiles, and min/max values
1881 
1882  N0 LeafNodes each with a bit mask, N0 ValueTypes and min/max
1883 
1884  Example layout: ("---" implies it has a custom offset, "..." implies zero or more)
1885  [GridData][TreeData]---[RootData][ROOT TILES...]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
1886 */
1887 
1888 /// @brief Struct with all the member data of the Grid (useful during serialization of an openvdb grid)
1889 ///
1890 /// @note The transform is assumed to be affine (so linear) and have uniform scale! So frustum transforms
1891 /// and non-uniform scaling are not supported (primarily because they complicate ray-tracing in index space)
1892 ///
1893 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
1894 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridData
1895 { // sizeof(GridData) = 672B
1896  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less
1897  uint64_t mMagic; // 8B (0) magic to validate it is valid grid data.
1898  Checksum mChecksum; // 8B (8). Checksum of grid buffer.
1899  Version mVersion; // 4B (16) major, minor, and patch version numbers
1900  BitFlags<32> mFlags; // 4B (20). flags for grid.
1901  uint32_t mGridIndex; // 4B (24). Index of this grid in the buffer
1902  uint32_t mGridCount; // 4B (28). Total number of grids in the buffer
1903  uint64_t mGridSize; // 8B (32). byte count of this entire grid occupied in the buffer.
1904  char mGridName[MaxNameSize]; // 256B (40)
1905  Map mMap; // 264B (296). affine transformation between index and world space in both single and double precision
1906  Vec3dBBox mWorldBBox; // 48B (560). floating-point AABB of active values in WORLD SPACE (2 x 3 doubles)
1907  Vec3d mVoxelSize; // 24B (608). size of a voxel in world units
1908  GridClass mGridClass; // 4B (632).
1909  GridType mGridType; // 4B (636).
1910  int64_t mBlindMetadataOffset; // 8B (640). offset to beginning of GridBlindMetaData structures that follow this grid.
1911  uint32_t mBlindMetadataCount; // 4B (648). count of GridBlindMetaData structures that follow this grid.
1912  uint32_t mData0; // 4B (652) unused
1913  uint64_t mData1; // 8B (656) is use for the total number of values indexed by an IndexGrid
1914  uint64_t mData2; // 8B (664) padding to 32 B alignment
1915  /// @brief Use this method to initiate most member data
1916  GridData& operator=(const GridData&) = default;
1917  //__hostdev__ GridData& operator=(const GridData& other){return *util::memcpy(this, &other);}
1918  __hostdev__ void init(std::initializer_list<GridFlags> list = {GridFlags::IsBreadthFirst},
1919  uint64_t gridSize = 0u,
1920  const Map& map = Map(),
1921  GridType gridType = GridType::Unknown,
1922  GridClass gridClass = GridClass::Unknown)
1923  {
1924 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
1925  mMagic = NANOVDB_MAGIC_GRID;
1926 #else
1927  mMagic = NANOVDB_MAGIC_NUMB;
1928 #endif
1929  mChecksum.disable();// all 64 bits ON means checksum is disabled
1930  mVersion = Version();
1931  mFlags.initMask(list);
1932  mGridIndex = 0u;
1933  mGridCount = 1u;
1934  mGridSize = gridSize;
1935  mGridName[0] = '\0';
1936  mMap = map;
1937  mWorldBBox = Vec3dBBox();// invalid bbox
1938  mVoxelSize = map.getVoxelSize();
1939  mGridClass = gridClass;
1940  mGridType = gridType;
1941  mBlindMetadataOffset = mGridSize; // i.e. no blind data
1942  mBlindMetadataCount = 0u; // i.e. no blind data
1943  mData0 = 0u; // zero padding
1944  mData1 = 0u; // only used for index and point grids
1945 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
1946  mData2 = 0u;// unused
1947 #else
1948  mData2 = NANOVDB_MAGIC_GRID; // since version 32.6.0 (will change in the future)
1949 #endif
1950  }
1951  /// @brief return true if the magic number and the version are both valid
1952  __hostdev__ bool isValid() const {
1953  // Before v32.6.0: toMagic(mMagic) = MagicType::NanoVDB and mData2 was undefined
1954  // For v32.6.0: toMagic(mMagic) = MagicType::NanoVDB and toMagic(mData2) = MagicType::NanoGrid
1955  // After v32.7.X: toMagic(mMagic) = MagicType::NanoGrid and mData2 will again be undefined
1956  const MagicType magic = toMagic(mMagic);
1957  if (magic == MagicType::NanoGrid || toMagic(mData2) == MagicType::NanoGrid) return true;
1958  bool test = magic == MagicType::NanoVDB;// could be GridData or io::FileHeader
1959  if (test) test = mVersion.isCompatible();
1960  if (test) test = mGridCount > 0u && mGridIndex < mGridCount;
1961  if (test) test = mGridClass < GridClass::End && mGridType < GridType::End;
1962  return test;
1963  }
1964  // Set and unset various bit flags
1965  __hostdev__ void setMinMaxOn(bool on = true) { mFlags.setMask(GridFlags::HasMinMax, on); }
1966  __hostdev__ void setBBoxOn(bool on = true) { mFlags.setMask(GridFlags::HasBBox, on); }
1967  __hostdev__ void setLongGridNameOn(bool on = true) { mFlags.setMask(GridFlags::HasLongGridName, on); }
1968  __hostdev__ void setAverageOn(bool on = true) { mFlags.setMask(GridFlags::HasAverage, on); }
1969  __hostdev__ void setStdDeviationOn(bool on = true) { mFlags.setMask(GridFlags::HasStdDeviation, on); }
1970  __hostdev__ bool setGridName(const char* src)
1971  {
1972  const bool success = (util::strncpy(mGridName, src, MaxNameSize)[MaxNameSize-1] == '\0');
1973  if (!success) mGridName[MaxNameSize-1] = '\0';
1974  return success; // returns true if input grid name is NOT longer than MaxNameSize characters
1975  }
1976  // Affine transformations based on double precision
1977  template<typename Vec3T>
1978  __hostdev__ Vec3T applyMap(const Vec3T& xyz) const { return mMap.applyMap(xyz); } // Pos: index -> world
1979  template<typename Vec3T>
1980  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const { return mMap.applyInverseMap(xyz); } // Pos: world -> index
1981  template<typename Vec3T>
1982  __hostdev__ Vec3T applyJacobian(const Vec3T& xyz) const { return mMap.applyJacobian(xyz); } // Dir: index -> world
1983  template<typename Vec3T>
1984  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return mMap.applyInverseJacobian(xyz); } // Dir: world -> index
1985  template<typename Vec3T>
1986  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return mMap.applyIJT(xyz); }
1987  // Affine transformations based on single precision
1988  template<typename Vec3T>
1989  __hostdev__ Vec3T applyMapF(const Vec3T& xyz) const { return mMap.applyMapF(xyz); } // Pos: index -> world
1990  template<typename Vec3T>
1991  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const { return mMap.applyInverseMapF(xyz); } // Pos: world -> index
1992  template<typename Vec3T>
1993  __hostdev__ Vec3T applyJacobianF(const Vec3T& xyz) const { return mMap.applyJacobianF(xyz); } // Dir: index -> world
1994  template<typename Vec3T>
1995  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return mMap.applyInverseJacobianF(xyz); } // Dir: world -> index
1996  template<typename Vec3T>
1997  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return mMap.applyIJTF(xyz); }
1998 
1999  // @brief Return a non-const void pointer to the tree
2000  __hostdev__ void* treePtr() { return this + 1; }// TreeData is always right after GridData
2001 
2002  // @brief Return a const void pointer to the tree
2003  __hostdev__ const void* treePtr() const { return this + 1; }// TreeData is always right after GridData
2004 
2005  /// @brief Return a non-const void pointer to the first node at @c LEVEL
2006  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
2007  template <uint32_t LEVEL>
2008  __hostdev__ const void* nodePtr() const
2009  {
2010  static_assert(LEVEL >= 0 && LEVEL <= 3, "invalid LEVEL template parameter");
2011  const void *treeData = this + 1;// TreeData is always right after GridData
2012  const uint64_t nodeOffset = *util::PtrAdd<uint64_t>(treeData, 8*LEVEL);// skip LEVEL uint64_t
2013  return nodeOffset ? util::PtrAdd(treeData, nodeOffset) : nullptr;
2014  }
2015 
2016  /// @brief Return a non-const void pointer to the first node at @c LEVEL
2017  /// @tparam LEVEL of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
2018  /// @warning If not nodes exist at @c LEVEL NULL is returned
2019  template <uint32_t LEVEL>
2021  {
2022  static_assert(LEVEL >= 0 && LEVEL <= 3, "invalid LEVEL template parameter");
2023  void *treeData = this + 1;// TreeData is always right after GridData
2024  const uint64_t nodeOffset = *util::PtrAdd<uint64_t>(treeData, 8*LEVEL);// skip LEVEL uint64_t
2025  return nodeOffset ? util::PtrAdd(treeData, nodeOffset) : nullptr;
2026  }
2027 
2028  /// @brief Return number of nodes at @c LEVEL
2029  /// @tparam Level of the node. LEVEL 0 means leaf node and LEVEL 2 means upper node
2030  template <uint32_t LEVEL>
2031  __hostdev__ uint32_t nodeCount() const
2032  {
2033  static_assert(LEVEL >= 0 && LEVEL < 3, "invalid LEVEL template parameter");
2034  return *util::PtrAdd<uint32_t>(this + 1, 4*(8 + LEVEL));// TreeData is always right after GridData
2035  }
2036 
2037  /// @brief Returns a const reference to the blindMetaData at the specified linear offset.
2038  ///
2039  /// @warning The linear offset is assumed to be in the valid range
2041  {
2042  NANOVDB_ASSERT(n < mBlindMetadataCount);
2043  return util::PtrAdd<GridBlindMetaData>(this, mBlindMetadataOffset) + n;
2044  }
2045 
2046  __hostdev__ const char* gridName() const
2047  {
2048  if (mFlags.isMaskOn(GridFlags::HasLongGridName)) {// search for first blind meta data that contains a name
2049  NANOVDB_ASSERT(mBlindMetadataCount > 0);
2050  for (uint32_t i = 0; i < mBlindMetadataCount; ++i) {
2051  const auto* metaData = this->blindMetaData(i);// EXTREMELY important to be a pointer
2052  if (metaData->mDataClass == GridBlindDataClass::GridName) {
2053  NANOVDB_ASSERT(metaData->mDataType == GridType::Unknown);
2054  return metaData->template getBlindData<const char>();
2055  }
2056  }
2057  NANOVDB_ASSERT(false); // should never hit this!
2058  }
2059  return mGridName;
2060  }
2061 
2062  /// @brief Return memory usage in bytes for this class only.
2063  __hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
2064 
2065  /// @brief return AABB of active values in world space
2066  __hostdev__ const Vec3dBBox& worldBBox() const { return mWorldBBox; }
2067 
2068  /// @brief return AABB of active values in index space
2069  __hostdev__ const CoordBBox& indexBBox() const {return *(const CoordBBox*)(this->nodePtr<3>());}
2070 
2071  /// @brief return the root table has size
2072  __hostdev__ uint32_t rootTableSize() const
2073  {
2074  const void *root = this->nodePtr<3>();
2075  return root ? *util::PtrAdd<uint32_t>(root, sizeof(CoordBBox)) : 0u;
2076  }
2077 
2078  /// @brief test if the grid is empty, e.i the root table has size 0
2079  /// @return true if this grid contains not data whatsoever
2080  __hostdev__ bool isEmpty() const {return this->rootTableSize() == 0u;}
2081 
2082  /// @brief return true if RootData follows TreeData in memory without any extra padding
2083  /// @details TreeData is always following right after GridData, but the same might not be true for RootData
2084  __hostdev__ bool isRootConnected() const { return *(const uint64_t*)((const char*)(this + 1) + 24) == 64u;}
2085 }; // GridData
2086 
2087 // Forward declaration of accelerated random access class
2088 template<typename BuildT, int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1>
2090 
2091 template<typename BuildT>
2093 
2094 /// @brief Highest level of the data structure. Contains a tree and a world->index
2095 /// transform (that currently only supports uniform scaling and translation).
2096 ///
2097 /// @note This the API of this class to interface with client code
2098 template<typename TreeT>
2099 class Grid : public GridData
2100 {
2101 public:
2102  using TreeType = TreeT;
2103  using RootType = typename TreeT::RootType;
2105  using UpperNodeType = typename RootNodeType::ChildNodeType;
2106  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2107  using LeafNodeType = typename RootType::LeafNodeType;
2109  using ValueType = typename TreeT::ValueType;
2110  using BuildType = typename TreeT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2111  using CoordType = typename TreeT::CoordType;
2113 
2114  /// @brief Disallow constructions, copy and assignment
2115  ///
2116  /// @note Only a Serializer, defined elsewhere, can instantiate this class
2117  Grid(const Grid&) = delete;
2118  Grid& operator=(const Grid&) = delete;
2119  ~Grid() = delete;
2120 
2121  __hostdev__ Version version() const { return DataType::mVersion; }
2122 
2123  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2124 
2125  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2126 
2127  /// @brief Return memory usage in bytes for this class only.
2128  //__hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
2129 
2130  /// @brief Return the memory footprint of the entire grid, i.e. including all nodes and blind data
2131  __hostdev__ uint64_t gridSize() const { return DataType::mGridSize; }
2132 
2133  /// @brief Return index of this grid in the buffer
2134  __hostdev__ uint32_t gridIndex() const { return DataType::mGridIndex; }
2135 
2136  /// @brief Return total number of grids in the buffer
2137  __hostdev__ uint32_t gridCount() const { return DataType::mGridCount; }
2138 
2139  /// @brief @brief Return the total number of values indexed by this IndexGrid
2140  ///
2141  /// @note This method is only defined for IndexGrid = NanoGrid<ValueIndex || ValueOnIndex || ValueIndexMask || ValueOnIndexMask>
2142  template<typename T = BuildType>
2143  __hostdev__ typename util::enable_if<BuildTraits<T>::is_index, const uint64_t&>::type
2144  valueCount() const { return DataType::mData1; }
2145 
2146  /// @brief @brief Return the total number of points indexed by this PointGrid
2147  ///
2148  /// @note This method is only defined for PointGrid = NanoGrid<Point>
2149  template<typename T = BuildType>
2150  __hostdev__ typename util::enable_if<util::is_same<T, Point>::value, const uint64_t&>::type
2151  pointCount() const { return DataType::mData1; }
2152 
2153  /// @brief Return a const reference to the tree
2154  __hostdev__ const TreeT& tree() const { return *reinterpret_cast<const TreeT*>(this->treePtr()); }
2155 
2156  /// @brief Return a non-const reference to the tree
2157  __hostdev__ TreeT& tree() { return *reinterpret_cast<TreeT*>(this->treePtr()); }
2158 
2159  /// @brief Return a new instance of a ReadAccessor used to access values in this grid
2160  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->tree().root()); }
2161 
2162  /// @brief Return a const reference to the size of a voxel in world units
2163  __hostdev__ const Vec3d& voxelSize() const { return DataType::mVoxelSize; }
2164 
2165  /// @brief Return a const reference to the Map for this grid
2166  __hostdev__ const Map& map() const { return DataType::mMap; }
2167 
2168  /// @brief world to index space transformation
2169  template<typename Vec3T>
2170  __hostdev__ Vec3T worldToIndex(const Vec3T& xyz) const { return this->applyInverseMap(xyz); }
2171 
2172  /// @brief index to world space transformation
2173  template<typename Vec3T>
2174  __hostdev__ Vec3T indexToWorld(const Vec3T& xyz) const { return this->applyMap(xyz); }
2175 
2176  /// @brief transformation from index space direction to world space direction
2177  /// @warning assumes dir to be normalized
2178  template<typename Vec3T>
2179  __hostdev__ Vec3T indexToWorldDir(const Vec3T& dir) const { return this->applyJacobian(dir); }
2180 
2181  /// @brief transformation from world space direction to index space direction
2182  /// @warning assumes dir to be normalized
2183  template<typename Vec3T>
2184  __hostdev__ Vec3T worldToIndexDir(const Vec3T& dir) const { return this->applyInverseJacobian(dir); }
2185 
2186  /// @brief transform the gradient from index space to world space.
2187  /// @details Applies the inverse jacobian transform map.
2188  template<typename Vec3T>
2189  __hostdev__ Vec3T indexToWorldGrad(const Vec3T& grad) const { return this->applyIJT(grad); }
2190 
2191  /// @brief world to index space transformation
2192  template<typename Vec3T>
2193  __hostdev__ Vec3T worldToIndexF(const Vec3T& xyz) const { return this->applyInverseMapF(xyz); }
2194 
2195  /// @brief index to world space transformation
2196  template<typename Vec3T>
2197  __hostdev__ Vec3T indexToWorldF(const Vec3T& xyz) const { return this->applyMapF(xyz); }
2198 
2199  /// @brief transformation from index space direction to world space direction
2200  /// @warning assumes dir to be normalized
2201  template<typename Vec3T>
2202  __hostdev__ Vec3T indexToWorldDirF(const Vec3T& dir) const { return this->applyJacobianF(dir); }
2203 
2204  /// @brief transformation from world space direction to index space direction
2205  /// @warning assumes dir to be normalized
2206  template<typename Vec3T>
2207  __hostdev__ Vec3T worldToIndexDirF(const Vec3T& dir) const { return this->applyInverseJacobianF(dir); }
2208 
2209  /// @brief Transforms the gradient from index space to world space.
2210  /// @details Applies the inverse jacobian transform map.
2211  template<typename Vec3T>
2212  __hostdev__ Vec3T indexToWorldGradF(const Vec3T& grad) const { return DataType::applyIJTF(grad); }
2213 
2214  /// @brief Computes a AABB of active values in world space
2215  //__hostdev__ const Vec3dBBox& worldBBox() const { return DataType::mWorldBBox; }
2216 
2217  /// @brief Computes a AABB of active values in index space
2218  ///
2219  /// @note This method is returning a floating point bounding box and not a CoordBBox. This makes
2220  /// it more useful for clipping rays.
2221  //__hostdev__ const BBox<CoordType>& indexBBox() const { return this->tree().bbox(); }
2222 
2223  /// @brief Return the total number of active voxels in this tree.
2224  __hostdev__ uint64_t activeVoxelCount() const { return this->tree().activeVoxelCount(); }
2225 
2226  /// @brief Methods related to the classification of this grid
2227  __hostdev__ bool isValid() const { return DataType::isValid(); }
2228  __hostdev__ const GridType& gridType() const { return DataType::mGridType; }
2229  __hostdev__ const GridClass& gridClass() const { return DataType::mGridClass; }
2230  __hostdev__ bool isLevelSet() const { return DataType::mGridClass == GridClass::LevelSet; }
2231  __hostdev__ bool isFogVolume() const { return DataType::mGridClass == GridClass::FogVolume; }
2232  __hostdev__ bool isStaggered() const { return DataType::mGridClass == GridClass::Staggered; }
2233  __hostdev__ bool isPointIndex() const { return DataType::mGridClass == GridClass::PointIndex; }
2234  __hostdev__ bool isGridIndex() const { return DataType::mGridClass == GridClass::IndexGrid; }
2235  __hostdev__ bool isPointData() const { return DataType::mGridClass == GridClass::PointData; }
2236  __hostdev__ bool isMask() const { return DataType::mGridClass == GridClass::Topology; }
2237  __hostdev__ bool isUnknown() const { return DataType::mGridClass == GridClass::Unknown; }
2238  __hostdev__ bool hasMinMax() const { return DataType::mFlags.isMaskOn(GridFlags::HasMinMax); }
2239  __hostdev__ bool hasBBox() const { return DataType::mFlags.isMaskOn(GridFlags::HasBBox); }
2240  __hostdev__ bool hasLongGridName() const { return DataType::mFlags.isMaskOn(GridFlags::HasLongGridName); }
2241  __hostdev__ bool hasAverage() const { return DataType::mFlags.isMaskOn(GridFlags::HasAverage); }
2242  __hostdev__ bool hasStdDeviation() const { return DataType::mFlags.isMaskOn(GridFlags::HasStdDeviation); }
2243  __hostdev__ bool isBreadthFirst() const { return DataType::mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
2244 
2245  /// @brief return true if the specified node type is laid out breadth-first in memory and has a fixed size.
2246  /// This allows for sequential access to the nodes.
2247  template<typename NodeT>
2248  __hostdev__ bool isSequential() const { return NodeT::FIXED_SIZE && this->isBreadthFirst(); }
2249 
2250  /// @brief return true if the specified node level is laid out breadth-first in memory and has a fixed size.
2251  /// This allows for sequential access to the nodes.
2252  template<int LEVEL>
2253  __hostdev__ bool isSequential() const { return NodeTrait<TreeT, LEVEL>::type::FIXED_SIZE && this->isBreadthFirst(); }
2254 
2255  /// @brief return true if nodes at all levels can safely be accessed with simple linear offsets
2256  __hostdev__ bool isSequential() const { return UpperNodeType::FIXED_SIZE && LowerNodeType::FIXED_SIZE && LeafNodeType::FIXED_SIZE && this->isBreadthFirst(); }
2257 
2258  /// @brief Return a c-string with the name of this grid
2259  __hostdev__ const char* gridName() const { return DataType::gridName(); }
2260 
2261  /// @brief Return a c-string with the name of this grid, truncated to 255 characters
2262  __hostdev__ const char* shortGridName() const { return DataType::mGridName; }
2263 
2264  /// @brief Return checksum of the grid buffer.
2265  __hostdev__ const Checksum& checksum() const { return DataType::mChecksum; }
2266 
2267  /// @brief Return true if this grid is empty, i.e. contains no values or nodes.
2268  //__hostdev__ bool isEmpty() const { return this->tree().isEmpty(); }
2269 
2270  /// @brief Return the count of blind-data encoded in this grid
2271  __hostdev__ uint32_t blindDataCount() const { return DataType::mBlindMetadataCount; }
2272 
2273  /// @brief Return the index of the first blind data with specified name if found, otherwise -1.
2274  __hostdev__ int findBlindData(const char* name) const;
2275 
2276  /// @brief Return the index of the first blind data with specified semantic if found, otherwise -1.
2277  __hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const;
2278 
2279  /// @brief Returns a const pointer to the blindData at the specified linear offset.
2280  ///
2281  /// @warning Pointer might be NULL and the linear offset is assumed to be in the valid range
2282  // this method is deprecated !!!!
2283  [[deprecated("Use Grid::getBlindData<T>() instead.")]]
2284  __hostdev__ const void* blindData(uint32_t n) const
2285  {
2286  printf("\nnanovdb::Grid::blindData is unsafe and hence deprecated! Please use nanovdb::Grid::getBlindData instead.\n\n");
2287  NANOVDB_ASSERT(n < DataType::mBlindMetadataCount);
2288  return this->blindMetaData(n).blindData();
2289  }
2290 
2291  template <typename BlindDataT>
2292  __hostdev__ const BlindDataT* getBlindData(uint32_t n) const
2293  {
2294  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
2295  return this->blindMetaData(n).template getBlindData<BlindDataT>();// NULL if mismatching BlindDataT
2296  }
2297 
2298  template <typename BlindDataT>
2299  __hostdev__ BlindDataT* getBlindData(uint32_t n)
2300  {
2301  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
2302  return const_cast<BlindDataT*>(this->blindMetaData(n).template getBlindData<BlindDataT>());// NULL if mismatching BlindDataT
2303  }
2304 
2305  __hostdev__ const GridBlindMetaData& blindMetaData(uint32_t n) const { return *DataType::blindMetaData(n); }
2306 
2307 private:
2308  static_assert(sizeof(GridData) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(GridData) is misaligned");
2309 }; // Class Grid
2310 
2311 template<typename TreeT>
2313 {
2314  for (uint32_t i = 0, n = this->blindDataCount(); i < n; ++i) {
2315  if (this->blindMetaData(i).mSemantic == semantic)
2316  return int(i);
2317  }
2318  return -1;
2319 }
2320 
2321 template<typename TreeT>
2322 __hostdev__ int Grid<TreeT>::findBlindData(const char* name) const
2323 {
2324  auto test = [&](int n) {
2325  const char* str = this->blindMetaData(n).mName;
2326  for (int i = 0; i < GridBlindMetaData::MaxNameSize; ++i) {
2327  if (name[i] != str[i])
2328  return false;
2329  if (name[i] == '\0' && str[i] == '\0')
2330  return true;
2331  }
2332  return true; // all len characters matched
2333  };
2334  for (int i = 0, n = this->blindDataCount(); i < n; ++i)
2335  if (test(i))
2336  return i;
2337  return -1;
2338 }
2339 
2340 // ----------------------------> Tree <--------------------------------------
2341 
2342 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) TreeData
2343 { // sizeof(TreeData) == 64B
2344  int64_t mNodeOffset[4];// 32B, byte offset from this tree to first leaf, lower, upper and root node. If mNodeCount[N]=0 => mNodeOffset[N]==mNodeOffset[N+1]
2345  uint32_t mNodeCount[3]; // 12B, total number of nodes of type: leaf, lower internal, upper internal
2346  uint32_t mTileCount[3]; // 12B, total number of active tile values at the lower internal, upper internal and root node levels
2347  uint64_t mVoxelCount; // 8B, total number of active voxels in the root and all its child nodes.
2348  // No padding since it's always 32B aligned
2349  TreeData& operator=(const TreeData&) = default;
2350  __hostdev__ void setRoot(const void* root) {
2351  NANOVDB_ASSERT(root);
2352  mNodeOffset[3] = util::PtrDiff(root, this);
2353  }
2354 
2355  /// @brief Get a non-const void pointer to the root node (never NULL)
2356  __hostdev__ void* getRoot() { return util::PtrAdd(this, mNodeOffset[3]); }
2357 
2358  /// @brief Get a const void pointer to the root node (never NULL)
2359  __hostdev__ const void* getRoot() const { return util::PtrAdd(this, mNodeOffset[3]); }
2360 
2361  template<typename NodeT>
2362  __hostdev__ void setFirstNode(const NodeT* node) {mNodeOffset[NodeT::LEVEL] = (node ? util::PtrDiff(node, this) : 0);}
2363 
2364  /// @brief Return true if the root is empty, i.e. has not child nodes or constant tiles
2365  __hostdev__ bool isEmpty() const {return mNodeOffset[3] ? *util::PtrAdd<uint32_t>(this, mNodeOffset[3] + sizeof(CoordBBox)) == 0 : true;}
2366 
2367  /// @brief Return the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2368  __hostdev__ CoordBBox bbox() const {return mNodeOffset[3] ? *util::PtrAdd<CoordBBox>(this, mNodeOffset[3]) : CoordBBox();}
2369 
2370  /// @brief return true if RootData is layout out immediately after TreeData in memory
2371  __hostdev__ bool isRootNext() const {return mNodeOffset[3] ? mNodeOffset[3] == sizeof(TreeData) : false; }
2372 };// TreeData
2373 
2374 // ----------------------------> GridTree <--------------------------------------
2375 
2376 /// @brief defines a tree type from a grid type while preserving constness
2377 template<typename GridT>
2378 struct GridTree
2379 {
2380  using Type = typename GridT::TreeType;
2381  using type = typename GridT::TreeType;
2382 };
2383 template<typename GridT>
2384 struct GridTree<const GridT>
2385 {
2386  using Type = const typename GridT::TreeType;
2387  using type = const typename GridT::TreeType;
2388 };
2389 
2390 // ----------------------------> Tree <--------------------------------------
2391 
2392 /// @brief VDB Tree, which is a thin wrapper around a RootNode.
2393 template<typename RootT>
2394 class Tree : public TreeData
2395 {
2396  static_assert(RootT::LEVEL == 3, "Tree depth is not supported");
2397  static_assert(RootT::ChildNodeType::LOG2DIM == 5, "Tree configuration is not supported");
2398  static_assert(RootT::ChildNodeType::ChildNodeType::LOG2DIM == 4, "Tree configuration is not supported");
2399  static_assert(RootT::LeafNodeType::LOG2DIM == 3, "Tree configuration is not supported");
2400 
2401 public:
2403  using RootType = RootT;
2404  using RootNodeType = RootT;
2405  using UpperNodeType = typename RootNodeType::ChildNodeType;
2406  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2407  using LeafNodeType = typename RootType::LeafNodeType;
2408  using ValueType = typename RootT::ValueType;
2409  using BuildType = typename RootT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2410  using CoordType = typename RootT::CoordType;
2412 
2413  using Node3 = RootT;
2414  using Node2 = typename RootT::ChildNodeType;
2415  using Node1 = typename Node2::ChildNodeType;
2417 
2418  /// @brief This class cannot be constructed or deleted
2419  Tree() = delete;
2420  Tree(const Tree&) = delete;
2421  Tree& operator=(const Tree&) = delete;
2422  ~Tree() = delete;
2423 
2424  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2425 
2426  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2427 
2428  /// @brief return memory usage in bytes for the class
2429  __hostdev__ static uint64_t memUsage() { return sizeof(DataType); }
2430 
2431  __hostdev__ RootT& root() {return *reinterpret_cast<RootT*>(DataType::getRoot());}
2432 
2433  __hostdev__ const RootT& root() const {return *reinterpret_cast<const RootT*>(DataType::getRoot());}
2434 
2435  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->root()); }
2436 
2437  /// @brief Return the value of the given voxel (regardless of state or location in the tree.)
2438  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->root().getValue(ijk); }
2439  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->root().getValue(CoordType(i, j, k)); }
2440 
2441  /// @brief Return the active state of the given voxel (regardless of state or location in the tree.)
2442  __hostdev__ bool isActive(const CoordType& ijk) const { return this->root().isActive(ijk); }
2443 
2444  /// @brief Return true if this tree is empty, i.e. contains no values or nodes
2445  //__hostdev__ bool isEmpty() const { return this->root().isEmpty(); }
2446 
2447  /// @brief Combines the previous two methods in a single call
2448  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->root().probeValue(ijk, v); }
2449 
2450  /// @brief Return a const reference to the background value.
2451  __hostdev__ const ValueType& background() const { return this->root().background(); }
2452 
2453  /// @brief Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree
2454  __hostdev__ void extrema(ValueType& min, ValueType& max) const;
2455 
2456  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2457  //__hostdev__ const BBox<CoordType>& bbox() const { return this->root().bbox(); }
2458 
2459  /// @brief Return the total number of active voxels in this tree.
2460  __hostdev__ uint64_t activeVoxelCount() const { return DataType::mVoxelCount; }
2461 
2462  /// @brief Return the total number of active tiles at the specified level of the tree.
2463  ///
2464  /// @details level = 1,2,3 corresponds to active tile count in lower internal nodes, upper
2465  /// internal nodes, and the root level. Note active values at the leaf level are
2466  /// referred to as active voxels (see activeVoxelCount defined above).
2467  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const
2468  {
2469  NANOVDB_ASSERT(level > 0 && level <= 3); // 1, 2, or 3
2470  return DataType::mTileCount[level - 1];
2471  }
2472 
2473  template<typename NodeT>
2474  __hostdev__ uint32_t nodeCount() const
2475  {
2476  static_assert(NodeT::LEVEL < 3, "Invalid NodeT");
2477  return DataType::mNodeCount[NodeT::LEVEL];
2478  }
2479 
2480  __hostdev__ uint32_t nodeCount(int level) const
2481  {
2482  NANOVDB_ASSERT(level < 3);
2483  return DataType::mNodeCount[level];
2484  }
2485 
2486  __hostdev__ uint32_t totalNodeCount() const
2487  {
2488  return DataType::mNodeCount[0] + DataType::mNodeCount[1] + DataType::mNodeCount[2];
2489  }
2490 
2491  /// @brief return a pointer to the first node of the specified type
2492  ///
2493  /// @warning Note it may return NULL if no nodes exist
2494  template<typename NodeT>
2496  {
2497  const int64_t nodeOffset = DataType::mNodeOffset[NodeT::LEVEL];
2498  return nodeOffset ? util::PtrAdd<NodeT>(this, nodeOffset) : nullptr;
2499  }
2500 
2501  /// @brief return a const pointer to the first node of the specified type
2502  ///
2503  /// @warning Note it may return NULL if no nodes exist
2504  template<typename NodeT>
2505  __hostdev__ const NodeT* getFirstNode() const
2506  {
2507  const int64_t nodeOffset = DataType::mNodeOffset[NodeT::LEVEL];
2508  return nodeOffset ? util::PtrAdd<NodeT>(this, nodeOffset) : nullptr;
2509  }
2510 
2511  /// @brief return a pointer to the first node at the specified level
2512  ///
2513  /// @warning Note it may return NULL if no nodes exist
2514  template<int LEVEL>
2516  {
2517  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
2518  }
2519 
2520  /// @brief return a const pointer to the first node of the specified level
2521  ///
2522  /// @warning Note it may return NULL if no nodes exist
2523  template<int LEVEL>
2525  {
2526  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
2527  }
2528 
2529  /// @brief Template specializations of getFirstNode
2530  __hostdev__ LeafNodeType* getFirstLeaf() { return this->getFirstNode<LeafNodeType>(); }
2531  __hostdev__ const LeafNodeType* getFirstLeaf() const { return this->getFirstNode<LeafNodeType>(); }
2532  __hostdev__ typename NodeTrait<RootT, 1>::type* getFirstLower() { return this->getFirstNode<1>(); }
2533  __hostdev__ const typename NodeTrait<RootT, 1>::type* getFirstLower() const { return this->getFirstNode<1>(); }
2534  __hostdev__ typename NodeTrait<RootT, 2>::type* getFirstUpper() { return this->getFirstNode<2>(); }
2535  __hostdev__ const typename NodeTrait<RootT, 2>::type* getFirstUpper() const { return this->getFirstNode<2>(); }
2536 
2537  template<typename OpT, typename... ArgsT>
2538  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
2539  {
2540  return this->root().template get<OpT>(ijk, args...);
2541  }
2542 
2543  template<typename OpT, typename... ArgsT>
2544  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
2545  {
2546  return this->root().template set<OpT>(ijk, args...);
2547  }
2548 
2549 private:
2550  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(TreeData) is misaligned");
2551 
2552 }; // Tree class
2553 
2554 template<typename RootT>
2556 {
2557  min = this->root().minimum();
2558  max = this->root().maximum();
2559 }
2560 
2561 // --------------------------> RootData <------------------------------------
2562 
2563 /// @brief Struct with all the member data of the RootNode (useful during serialization of an openvdb RootNode)
2564 ///
2565 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
2566 template<typename ChildT>
2567 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) RootData
2568 {
2569  using ValueT = typename ChildT::ValueType;
2570  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2571  using CoordT = typename ChildT::CoordType;
2572  using StatsT = typename ChildT::FloatType;
2573  static constexpr bool FIXED_SIZE = false;
2574 
2575  /// @brief Return a key based on the coordinates of a voxel
2576 #ifdef NANOVDB_USE_SINGLE_ROOT_KEY
2577  using KeyT = uint64_t;
2578  template<typename CoordType>
2579  __hostdev__ static KeyT CoordToKey(const CoordType& ijk)
2580  {
2581  static_assert(sizeof(CoordT) == sizeof(CoordType), "Mismatching sizeof");
2582  static_assert(32 - ChildT::TOTAL <= 21, "Cannot use 64 bit root keys");
2583  return (KeyT(uint32_t(ijk[2]) >> ChildT::TOTAL)) | // z is the lower 21 bits
2584  (KeyT(uint32_t(ijk[1]) >> ChildT::TOTAL) << 21) | // y is the middle 21 bits
2585  (KeyT(uint32_t(ijk[0]) >> ChildT::TOTAL) << 42); // x is the upper 21 bits
2586  }
2587  __hostdev__ static CoordT KeyToCoord(const KeyT& key)
2588  {
2589  static constexpr uint64_t MASK = (1u << 21) - 1; // used to mask out 21 lower bits
2590  return CoordT(((key >> 42) & MASK) << ChildT::TOTAL, // x are the upper 21 bits
2591  ((key >> 21) & MASK) << ChildT::TOTAL, // y are the middle 21 bits
2592  (key & MASK) << ChildT::TOTAL); // z are the lower 21 bits
2593  }
2594 #else
2595  using KeyT = CoordT;
2596  __hostdev__ static KeyT CoordToKey(const CoordT& ijk) { return ijk & ~ChildT::MASK; }
2597  __hostdev__ static CoordT KeyToCoord(const KeyT& key) { return key; }
2598 #endif
2599  math::BBox<CoordT> mBBox; // 24B. AABB of active values in index space.
2600  uint32_t mTableSize; // 4B. number of tiles and child pointers in the root node
2601 
2602  ValueT mBackground; // background value, i.e. value of any unset voxel
2603  ValueT mMinimum; // typically 4B, minimum of all the active values
2604  ValueT mMaximum; // typically 4B, maximum of all the active values
2605  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
2606  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
2607 
2608  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
2609  ///
2610  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
2611  __hostdev__ static constexpr uint32_t padding()
2612  {
2613  return sizeof(RootData) - (24 + 4 + 3 * sizeof(ValueT) + 2 * sizeof(StatsT));
2614  }
2615 
2616  struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) Tile
2617  {
2618  template<typename CoordType>
2619  __hostdev__ void setChild(const CoordType& k, const void* ptr, const RootData* data)
2620  {
2621  key = CoordToKey(k);
2622  state = false;
2623  child = util::PtrDiff(ptr, data);
2624  }
2625  template<typename CoordType, typename ValueType>
2626  __hostdev__ void setValue(const CoordType& k, bool s, const ValueType& v)
2627  {
2628  key = CoordToKey(k);
2629  state = s;
2630  value = v;
2631  child = 0;
2632  }
2633  __hostdev__ bool isChild() const { return child != 0; }
2634  __hostdev__ bool isValue() const { return child == 0; }
2635  __hostdev__ bool isActive() const { return child == 0 && state; }
2636  __hostdev__ CoordT origin() const { return KeyToCoord(key); }
2637  KeyT key; // NANOVDB_USE_SINGLE_ROOT_KEY ? 8B : 12B
2638  int64_t child; // 8B. signed byte offset from this node to the child node. 0 means it is a constant tile, so use value.
2639  uint32_t state; // 4B. state of tile value
2640  ValueT value; // value of tile (i.e. no child node)
2641  }; // Tile
2642 
2643  /// @brief Returns a pointer to the tile at the specified linear offset.
2644  ///
2645  /// @warning The linear offset is assumed to be in the valid range
2646  __hostdev__ const Tile* tile(uint32_t n) const
2647  {
2648  NANOVDB_ASSERT(n < mTableSize);
2649  return reinterpret_cast<const Tile*>(this + 1) + n;
2650  }
2651  __hostdev__ Tile* tile(uint32_t n)
2652  {
2653  NANOVDB_ASSERT(n < mTableSize);
2654  return reinterpret_cast<Tile*>(this + 1) + n;
2655  }
2656 
2657  template<typename DataT>
2658  class TileIter
2659  {
2660  protected:
2663  TileT *mBegin, *mPos, *mEnd;
2664 
2665  public:
2666  __hostdev__ TileIter() : mBegin(nullptr), mPos(nullptr), mEnd(nullptr) {}
2667  __hostdev__ TileIter(DataT* data, uint32_t pos = 0)
2668  : mBegin(reinterpret_cast<TileT*>(data + 1))// tiles reside right after the RootData
2669  , mPos(mBegin + pos)
2670  , mEnd(mBegin + data->mTableSize)
2671  {
2672  NANOVDB_ASSERT(data);
2673  NANOVDB_ASSERT(mBegin <= mPos);// pos > mTableSize is allowed
2674  NANOVDB_ASSERT(mBegin <= mEnd);// mTableSize = 0 is possible
2675  }
2676  __hostdev__ inline operator bool() const { return mPos < mEnd; }
2677  __hostdev__ inline auto pos() const {return mPos - mBegin; }
2679  {
2680  ++mPos;
2681  return *this;
2682  }
2683  __hostdev__ inline TileT& operator*() const
2684  {
2685  NANOVDB_ASSERT(mPos < mEnd);
2686  return *mPos;
2687  }
2688  __hostdev__ inline TileT* operator->() const
2689  {
2690  NANOVDB_ASSERT(mPos < mEnd);
2691  return mPos;
2692  }
2693  __hostdev__ inline DataT* data() const
2694  {
2695  NANOVDB_ASSERT(mBegin);
2696  return reinterpret_cast<DataT*>(mBegin) - 1;
2697  }
2698  __hostdev__ inline bool isChild() const
2699  {
2700  NANOVDB_ASSERT(mPos < mEnd);
2701  return mPos->child != 0;
2702  }
2703  __hostdev__ inline bool isValue() const
2704  {
2705  NANOVDB_ASSERT(mPos < mEnd);
2706  return mPos->child == 0;
2707  }
2708  __hostdev__ inline bool isValueOn() const
2709  {
2710  NANOVDB_ASSERT(mPos < mEnd);
2711  return mPos->child == 0 && mPos->state != 0;
2712  }
2713  __hostdev__ inline NodeT* child() const
2714  {
2715  NANOVDB_ASSERT(mPos < mEnd && mPos->child != 0);
2716  return util::PtrAdd<NodeT>(this->data(), mPos->child);// byte offset relative to RootData::this
2717  }
2718  __hostdev__ inline ValueT value() const
2719  {
2720  NANOVDB_ASSERT(mPos < mEnd && mPos->child == 0);
2721  return mPos->value;
2722  }
2723  };// TileIter
2724 
2727 
2730 
2732  {
2733  const auto key = CoordToKey(ijk);
2734  TileIterator iter(this);
2735  for(; iter; ++iter) if (iter->key == key) break;
2736  return iter;
2737  }
2738 
2739  __hostdev__ inline ConstTileIterator probe(const CoordT& ijk) const
2740  {
2741  const auto key = CoordToKey(ijk);
2742  ConstTileIterator iter(this);
2743  for(; iter; ++iter) if (iter->key == key) break;
2744  return iter;
2745  }
2746 
2747  __hostdev__ inline Tile* probeTile(const CoordT& ijk)
2748  {
2749  auto iter = this->probe(ijk);
2750  return iter ? iter.operator->() : nullptr;
2751  }
2752 
2753  __hostdev__ inline const Tile* probeTile(const CoordT& ijk) const
2754  {
2755  return const_cast<RootData*>(this)->probeTile(ijk);
2756  }
2757 
2758  __hostdev__ inline ChildT* probeChild(const CoordT& ijk)
2759  {
2760  auto iter = this->probe(ijk);
2761  return iter && iter.isChild() ? iter.child() : nullptr;
2762  }
2763 
2764  __hostdev__ inline const ChildT* probeChild(const CoordT& ijk) const
2765  {
2766  return const_cast<RootData*>(this)->probeChild(ijk);
2767  }
2768 
2769  /// @brief Returns a const reference to the child node in the specified tile.
2770  ///
2771  /// @warning A child node is assumed to exist in the specified tile
2772  __hostdev__ ChildT* getChild(const Tile* tile)
2773  {
2774  NANOVDB_ASSERT(tile->child);
2775  return util::PtrAdd<ChildT>(this, tile->child);
2776  }
2777  __hostdev__ const ChildT* getChild(const Tile* tile) const
2778  {
2779  NANOVDB_ASSERT(tile->child);
2780  return util::PtrAdd<ChildT>(this, tile->child);
2781  }
2782 
2783  __hostdev__ const ValueT& getMin() const { return mMinimum; }
2784  __hostdev__ const ValueT& getMax() const { return mMaximum; }
2785  __hostdev__ const StatsT& average() const { return mAverage; }
2786  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
2787 
2788  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
2789  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
2790  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
2791  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
2792 
2793  /// @brief This class cannot be constructed or deleted
2794  RootData() = delete;
2795  RootData(const RootData&) = delete;
2796  RootData& operator=(const RootData&) = delete;
2797  ~RootData() = delete;
2798 }; // RootData
2799 
2800 // --------------------------> RootNode <------------------------------------
2801 
2802 /// @brief Top-most node of the VDB tree structure.
2803 template<typename ChildT>
2804 class RootNode : public RootData<ChildT>
2805 {
2806 public:
2808  using ChildNodeType = ChildT;
2809  using RootType = RootNode<ChildT>; // this allows RootNode to behave like a Tree
2811  using UpperNodeType = ChildT;
2812  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2813  using LeafNodeType = typename ChildT::LeafNodeType;
2814  using ValueType = typename DataType::ValueT;
2815  using FloatType = typename DataType::StatsT;
2816  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2817 
2818  using CoordType = typename ChildT::CoordType;
2819  using BBoxType = math::BBox<CoordType>;
2821  using Tile = typename DataType::Tile;
2822  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
2823 
2824  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
2825 
2826  template<typename RootT>
2827  class BaseIter
2828  {
2829  protected:
2832  typename DataType::template TileIter<DataT> mTileIter;
2833  __hostdev__ BaseIter() : mTileIter() {}
2834  __hostdev__ BaseIter(DataT* data) : mTileIter(data){}
2835 
2836  public:
2837  __hostdev__ operator bool() const { return bool(mTileIter); }
2838  __hostdev__ uint32_t pos() const { return uint32_t(mTileIter.pos()); }
2839  __hostdev__ TileT* tile() const { return mTileIter.operator->(); }
2840  __hostdev__ CoordType getOrigin() const {return mTileIter->origin();}
2841  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
2842  }; // Member class BaseIter
2843 
2844  template<typename RootT>
2845  class ChildIter : public BaseIter<RootT>
2846  {
2847  static_assert(util::is_same<typename util::remove_const<RootT>::type, RootNode>::value, "Invalid RootT");
2848  using BaseT = BaseIter<RootT>;
2849  using NodeT = typename util::match_const<ChildT, RootT>::type;
2850  using BaseT::mTileIter;
2851 
2852  public:
2854  __hostdev__ ChildIter(RootT* parent) : BaseT(parent->data())
2855  {
2856  while (mTileIter && mTileIter.isValue()) ++mTileIter;
2857  }
2858  __hostdev__ NodeT& operator*() const {return *mTileIter.child();}
2859  __hostdev__ NodeT* operator->() const {return mTileIter.child();}
2861  {
2862  ++mTileIter;
2863  while (mTileIter && mTileIter.isValue()) ++mTileIter;
2864  return *this;
2865  }
2867  {
2868  auto tmp = *this;
2869  this->operator++();
2870  return tmp;
2871  }
2872  }; // Member class ChildIter
2873 
2876 
2879 
2880  template<typename RootT>
2881  class ValueIter : public BaseIter<RootT>
2882  {
2883  using BaseT = BaseIter<RootT>;
2884  using BaseT::mTileIter;
2885 
2886  public:
2888  __hostdev__ ValueIter(RootT* parent) : BaseT(parent->data())
2889  {
2890  while (mTileIter && mTileIter.isChild()) ++mTileIter;
2891  }
2892  __hostdev__ ValueType operator*() const {return mTileIter.value();}
2893  __hostdev__ bool isActive() const {return mTileIter.isValueOn();}
2895  {
2896  ++mTileIter;
2897  while (mTileIter && mTileIter.isChild()) ++mTileIter;
2898  return *this;
2899  }
2901  {
2902  auto tmp = *this;
2903  this->operator++();
2904  return tmp;
2905  }
2906  }; // Member class ValueIter
2907 
2910 
2913 
2914  template<typename RootT>
2915  class ValueOnIter : public BaseIter<RootT>
2916  {
2917  using BaseT = BaseIter<RootT>;
2918  using BaseT::mTileIter;
2919 
2920  public:
2922  __hostdev__ ValueOnIter(RootT* parent) : BaseT(parent->data())
2923  {
2924  while (mTileIter && !mTileIter.isValueOn()) ++mTileIter;
2925  }
2926  __hostdev__ ValueType operator*() const {return mTileIter.value();}
2928  {
2929  ++mTileIter;
2930  while (mTileIter && !mTileIter.isValueOn()) ++mTileIter;
2931  return *this;
2932  }
2934  {
2935  auto tmp = *this;
2936  this->operator++();
2937  return tmp;
2938  }
2939  }; // Member class ValueOnIter
2940 
2943 
2946 
2947  template<typename RootT>
2948  class DenseIter : public BaseIter<RootT>
2949  {
2950  using BaseT = BaseIter<RootT>;
2951  using NodeT = typename util::match_const<ChildT, RootT>::type;
2952  using BaseT::mTileIter;
2953 
2954  public:
2956  __hostdev__ DenseIter(RootT* parent) : BaseT(parent->data()){}
2957  __hostdev__ NodeT* probeChild(ValueType& value) const
2958  {
2959  if (mTileIter.isChild()) return mTileIter.child();
2960  value = mTileIter.value();
2961  return nullptr;
2962  }
2963  __hostdev__ bool isValueOn() const{return mTileIter.isValueOn();}
2965  {
2966  ++mTileIter;
2967  return *this;
2968  }
2970  {
2971  auto tmp = *this;
2972  ++mTileIter;
2973  return tmp;
2974  }
2975  }; // Member class DenseIter
2976 
2979 
2983 
2984  /// @brief This class cannot be constructed or deleted
2985  RootNode() = delete;
2986  RootNode(const RootNode&) = delete;
2987  RootNode& operator=(const RootNode&) = delete;
2988  ~RootNode() = delete;
2989 
2991 
2992  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2993 
2994  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2995 
2996  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2997  __hostdev__ const BBoxType& bbox() const { return DataType::mBBox; }
2998 
2999  /// @brief Return the total number of active voxels in the root and all its child nodes.
3000 
3001  /// @brief Return a const reference to the background value, i.e. the value associated with
3002  /// any coordinate location that has not been set explicitly.
3003  __hostdev__ const ValueType& background() const { return DataType::mBackground; }
3004 
3005  /// @brief Return the number of tiles encoded in this root node
3006  __hostdev__ const uint32_t& tileCount() const { return DataType::mTableSize; }
3007  __hostdev__ const uint32_t& getTableSize() const { return DataType::mTableSize; }
3008 
3009  /// @brief Return a const reference to the minimum active value encoded in this root node and any of its child nodes
3010  __hostdev__ const ValueType& minimum() const { return DataType::mMinimum; }
3011 
3012  /// @brief Return a const reference to the maximum active value encoded in this root node and any of its child nodes
3013  __hostdev__ const ValueType& maximum() const { return DataType::mMaximum; }
3014 
3015  /// @brief Return a const reference to the average of all the active values encoded in this root node and any of its child nodes
3016  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
3017 
3018  /// @brief Return the variance of all the active values encoded in this root node and any of its child nodes
3019  __hostdev__ FloatType variance() const { return math::Pow2(DataType::mStdDevi); }
3020 
3021  /// @brief Return a const reference to the standard deviation of all the active values encoded in this root node and any of its child nodes
3022  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
3023 
3024  /// @brief Return the expected memory footprint in bytes with the specified number of tiles
3025  __hostdev__ static uint64_t memUsage(uint32_t tableSize) { return sizeof(RootNode) + tableSize * sizeof(Tile); }
3026 
3027  /// @brief Return the actual memory footprint of this root node
3028  __hostdev__ uint64_t memUsage() const { return sizeof(RootNode) + DataType::mTableSize * sizeof(Tile); }
3029 
3030  /// @brief Return true if this RootNode is empty, i.e. contains no values or nodes
3031  __hostdev__ bool isEmpty() const { return DataType::mTableSize == uint32_t(0); }
3032 
3033  /// @brief Return the value of the given voxel
3034  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
3035  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildType>>(CoordType(i, j, k)); }
3036  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
3037  /// @brief return the state and updates the value of the specified voxel
3038  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
3039  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
3040 
3041  template<typename OpT, typename... ArgsT>
3042  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
3043  {
3044  if (const Tile* tile = this->probeTile(ijk)) {
3045  if constexpr(OpT::LEVEL < LEVEL) if (tile->isChild()) return this->getChild(tile)->template get<OpT>(ijk, args...);
3046  return OpT::get(*tile, args...);
3047  }
3048  return OpT::get(*this, args...);
3049  }
3050 
3051  template<typename OpT, typename... ArgsT>
3052  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args)
3053  {
3054  if (Tile* tile = DataType::probeTile(ijk)) {
3055  if constexpr(OpT::LEVEL < LEVEL) if (tile->isChild()) return this->getChild(tile)->template set<OpT>(ijk, args...);
3056  return OpT::set(*tile, args...);
3057  }
3058  return OpT::set(*this, args...);
3059  }
3060 
3061 private:
3062  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData) is misaligned");
3063  static_assert(sizeof(typename DataType::Tile) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData::Tile) is misaligned");
3064 
3065  template<typename, int, int, int>
3066  friend class ReadAccessor;
3067 
3068  template<typename>
3069  friend class Tree;
3070 
3071  template<typename RayT, typename AccT>
3072  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
3073  {
3074  if (const Tile* tile = this->probeTile(ijk)) {
3075  if (tile->isChild()) {
3076  const auto* child = this->getChild(tile);
3077  acc.insert(ijk, child);
3078  return child->getDimAndCache(ijk, ray, acc);
3079  }
3080  return 1 << ChildT::TOTAL; //tile value
3081  }
3082  return ChildNodeType::dim(); // background
3083  }
3084 
3085  template<typename OpT, typename AccT, typename... ArgsT>
3086  __hostdev__ typename OpT::Type getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
3087  {
3088  if (const Tile* tile = this->probeTile(ijk)) {
3089  if constexpr(OpT::LEVEL < LEVEL) {
3090  if (tile->isChild()) {
3091  const ChildT* child = this->getChild(tile);
3092  acc.insert(ijk, child);
3093  return child->template getAndCache<OpT>(ijk, acc, args...);
3094  }
3095  }
3096  return OpT::get(*tile, args...);
3097  }
3098  return OpT::get(*this, args...);
3099  }
3100 
3101  template<typename OpT, typename AccT, typename... ArgsT>
3102  __hostdev__ void setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
3103  {
3104  if (Tile* tile = DataType::probeTile(ijk)) {
3105  if constexpr(OpT::LEVEL < LEVEL) {
3106  if (tile->isChild()) {
3107  ChildT* child = this->getChild(tile);
3108  acc.insert(ijk, child);
3109  return child->template setAndCache<OpT>(ijk, acc, args...);
3110  }
3111  }
3112  return OpT::set(*tile, args...);
3113  }
3114  return OpT::set(*this, args...);
3115  }
3116 
3117 }; // RootNode class
3118 
3119 // After the RootNode the memory layout is assumed to be the sorted Tiles
3120 
3121 // --------------------------> InternalNode <------------------------------------
3122 
3123 /// @brief Struct with all the member data of the InternalNode (useful during serialization of an openvdb InternalNode)
3124 ///
3125 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3126 template<typename ChildT, uint32_t LOG2DIM>
3127 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) InternalData
3128 {
3129  using ValueT = typename ChildT::ValueType;
3130  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3131  using StatsT = typename ChildT::FloatType;
3132  using CoordT = typename ChildT::CoordType;
3133  using MaskT = typename ChildT::template MaskType<LOG2DIM>;
3134  static constexpr bool FIXED_SIZE = true;
3135 
3136  union Tile
3137  {
3139  int64_t child; //signed 64 bit byte offset relative to this InternalData, i.e. child-pointer = Tile::child + this
3140  /// @brief This class cannot be constructed or deleted
3141  Tile() = delete;
3142  Tile(const Tile&) = delete;
3143  Tile& operator=(const Tile&) = delete;
3144  ~Tile() = delete;
3145  };
3146 
3147  math::BBox<CoordT> mBBox; // 24B. node bounding box. |
3148  uint64_t mFlags; // 8B. node flags. | 32B aligned
3149  MaskT mValueMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
3150  MaskT mChildMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
3151 
3152  ValueT mMinimum; // typically 4B
3153  ValueT mMaximum; // typically 4B
3154  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
3155  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
3156  // possible padding, e.g. 28 byte padding when ValueType = bool
3157 
3158  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3159  ///
3160  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3161  __hostdev__ static constexpr uint32_t padding()
3162  {
3163  return sizeof(InternalData) - (24u + 8u + 2 * (sizeof(MaskT) + sizeof(ValueT) + sizeof(StatsT)) + (1u << (3 * LOG2DIM)) * (sizeof(ValueT) > 8u ? sizeof(ValueT) : 8u));
3164  }
3165  alignas(32) Tile mTable[1u << (3 * LOG2DIM)]; // sizeof(ValueT) x (16*16*16 or 32*32*32)
3166 
3167  __hostdev__ static uint64_t memUsage() { return sizeof(InternalData); }
3168 
3169  __hostdev__ void setChild(uint32_t n, const void* ptr)
3170  {
3171  NANOVDB_ASSERT(mChildMask.isOn(n));
3172  mTable[n].child = util::PtrDiff(ptr, this);
3173  }
3174 
3175  template<typename ValueT>
3176  __hostdev__ void setValue(uint32_t n, const ValueT& v)
3177  {
3178  NANOVDB_ASSERT(!mChildMask.isOn(n));
3179  mTable[n].value = v;
3180  }
3181 
3182  /// @brief Returns a pointer to the child node at the specifed linear offset.
3183  __hostdev__ ChildT* getChild(uint32_t n)
3184  {
3185  NANOVDB_ASSERT(mChildMask.isOn(n));
3186  return util::PtrAdd<ChildT>(this, mTable[n].child);
3187  }
3188  __hostdev__ const ChildT* getChild(uint32_t n) const
3189  {
3190  NANOVDB_ASSERT(mChildMask.isOn(n));
3191  return util::PtrAdd<ChildT>(this, mTable[n].child);
3192  }
3193 
3194  __hostdev__ ValueT getValue(uint32_t n) const
3195  {
3196  NANOVDB_ASSERT(mChildMask.isOff(n));
3197  return mTable[n].value;
3198  }
3199 
3200  __hostdev__ bool isActive(uint32_t n) const
3201  {
3202  NANOVDB_ASSERT(mChildMask.isOff(n));
3203  return mValueMask.isOn(n);
3204  }
3205 
3206  __hostdev__ bool isChild(uint32_t n) const { return mChildMask.isOn(n); }
3207 
3208  template<typename T>
3209  __hostdev__ void setOrigin(const T& ijk) { mBBox[0] = ijk; }
3210 
3211  __hostdev__ const ValueT& getMin() const { return mMinimum; }
3212  __hostdev__ const ValueT& getMax() const { return mMaximum; }
3213  __hostdev__ const StatsT& average() const { return mAverage; }
3214  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
3215 
3216 // GCC 11 (and possibly prior versions) has a regression that results in invalid
3217 // warnings when -Wstringop-overflow is turned on. For details, refer to
3218 // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101854
3219 #if defined(__GNUC__) && (__GNUC__ < 12) && !defined(__APPLE__) && !defined(__llvm__)
3220 #pragma GCC diagnostic push
3221 #pragma GCC diagnostic ignored "-Wstringop-overflow"
3222 #endif
3223  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
3224  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
3225  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
3226  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
3227 #if defined(__GNUC__) && (__GNUC__ < 12) && !defined(__APPLE__) && !defined(__llvm__)
3228 #pragma GCC diagnostic pop
3229 #endif
3230 
3231  /// @brief This class cannot be constructed or deleted
3232  InternalData() = delete;
3233  InternalData(const InternalData&) = delete;
3234  InternalData& operator=(const InternalData&) = delete;
3235  ~InternalData() = delete;
3236 }; // InternalData
3237 
3238 /// @brief Internal nodes of a VDB tree
3239 template<typename ChildT, uint32_t Log2Dim = ChildT::LOG2DIM + 1>
3240 class InternalNode : public InternalData<ChildT, Log2Dim>
3241 {
3242 public:
3244  using ValueType = typename DataType::ValueT;
3245  using FloatType = typename DataType::StatsT;
3246  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3247  using LeafNodeType = typename ChildT::LeafNodeType;
3248  using ChildNodeType = ChildT;
3249  using CoordType = typename ChildT::CoordType;
3250  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
3251  template<uint32_t LOG2>
3252  using MaskType = typename ChildT::template MaskType<LOG2>;
3253  template<bool On>
3254  using MaskIterT = typename Mask<Log2Dim>::template Iterator<On>;
3255 
3256  static constexpr uint32_t LOG2DIM = Log2Dim;
3257  static constexpr uint32_t TOTAL = LOG2DIM + ChildT::TOTAL; // dimension in index space
3258  static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
3259  static constexpr uint32_t SIZE = 1u << (3 * LOG2DIM); // number of tile values (or child pointers)
3260  static constexpr uint32_t MASK = (1u << TOTAL) - 1u;
3261  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
3262  static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node
3263 
3264  /// @brief Visits child nodes of this node only
3265  template <typename ParentT>
3266  class ChildIter : public MaskIterT<true>
3267  {
3268  static_assert(util::is_same<typename util::remove_const<ParentT>::type, InternalNode>::value, "Invalid ParentT");
3269  using BaseT = MaskIterT<true>;
3270  using NodeT = typename util::match_const<ChildT, ParentT>::type;
3271  ParentT* mParent;
3272 
3273  public:
3275  : BaseT()
3276  , mParent(nullptr)
3277  {
3278  }
3279  __hostdev__ ChildIter(ParentT* parent)
3280  : BaseT(parent->mChildMask.beginOn())
3281  , mParent(parent)
3282  {
3283  }
3284  ChildIter& operator=(const ChildIter&) = default;
3285  __hostdev__ NodeT& operator*() const
3286  {
3287  NANOVDB_ASSERT(*this);
3288  return *mParent->getChild(BaseT::pos());
3289  }
3290  __hostdev__ NodeT* operator->() const
3291  {
3292  NANOVDB_ASSERT(*this);
3293  return mParent->getChild(BaseT::pos());
3294  }
3296  {
3297  NANOVDB_ASSERT(*this);
3298  return (*this)->origin();
3299  }
3300  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3301  }; // Member class ChildIter
3302 
3305 
3308 
3309  /// @brief Visits all tile values in this node, i.e. both inactive and active tiles
3310  class ValueIterator : public MaskIterT<false>
3311  {
3312  using BaseT = MaskIterT<false>;
3313  const InternalNode* mParent;
3314 
3315  public:
3317  : BaseT()
3318  , mParent(nullptr)
3319  {
3320  }
3322  : BaseT(parent->data()->mChildMask.beginOff())
3323  , mParent(parent)
3324  {
3325  }
3326  ValueIterator& operator=(const ValueIterator&) = default;
3328  {
3329  NANOVDB_ASSERT(*this);
3330  return mParent->data()->getValue(BaseT::pos());
3331  }
3333  {
3334  NANOVDB_ASSERT(*this);
3335  return mParent->offsetToGlobalCoord(BaseT::pos());
3336  }
3337  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3338  __hostdev__ bool isActive() const
3339  {
3340  NANOVDB_ASSERT(*this);
3341  return mParent->data()->isActive(BaseT::mPos);
3342  }
3343  }; // Member class ValueIterator
3344 
3347 
3348  /// @brief Visits active tile values of this node only
3349  class ValueOnIterator : public MaskIterT<true>
3350  {
3351  using BaseT = MaskIterT<true>;
3352  const InternalNode* mParent;
3353 
3354  public:
3356  : BaseT()
3357  , mParent(nullptr)
3358  {
3359  }
3361  : BaseT(parent->data()->mValueMask.beginOn())
3362  , mParent(parent)
3363  {
3364  }
3365  ValueOnIterator& operator=(const ValueOnIterator&) = default;
3367  {
3368  NANOVDB_ASSERT(*this);
3369  return mParent->data()->getValue(BaseT::pos());
3370  }
3372  {
3373  NANOVDB_ASSERT(*this);
3374  return mParent->offsetToGlobalCoord(BaseT::pos());
3375  }
3376  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3377  }; // Member class ValueOnIterator
3378 
3381 
3382  /// @brief Visits all tile values and child nodes of this node
3383  class DenseIterator : public Mask<Log2Dim>::DenseIterator
3384  {
3385  using BaseT = typename Mask<Log2Dim>::DenseIterator;
3386  const DataType* mParent;
3387 
3388  public:
3390  : BaseT()
3391  , mParent(nullptr)
3392  {
3393  }
3395  : BaseT(0)
3396  , mParent(parent->data())
3397  {
3398  }
3399  DenseIterator& operator=(const DenseIterator&) = default;
3400  __hostdev__ const ChildT* probeChild(ValueType& value) const
3401  {
3402  NANOVDB_ASSERT(mParent && bool(*this));
3403  const ChildT* child = nullptr;
3404  if (mParent->mChildMask.isOn(BaseT::pos())) {
3405  child = mParent->getChild(BaseT::pos());
3406  } else {
3407  value = mParent->getValue(BaseT::pos());
3408  }
3409  return child;
3410  }
3411  __hostdev__ bool isValueOn() const
3412  {
3413  NANOVDB_ASSERT(mParent && bool(*this));
3414  return mParent->isActive(BaseT::pos());
3415  }
3417  {
3418  NANOVDB_ASSERT(mParent && bool(*this));
3419  return mParent->offsetToGlobalCoord(BaseT::pos());
3420  }
3421  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3422  }; // Member class DenseIterator
3423 
3425  __hostdev__ DenseIterator cbeginChildAll() const { return DenseIterator(this); } // matches openvdb
3426 
3427  /// @brief This class cannot be constructed or deleted
3428  InternalNode() = delete;
3429  InternalNode(const InternalNode&) = delete;
3430  InternalNode& operator=(const InternalNode&) = delete;
3431  ~InternalNode() = delete;
3432 
3433  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
3434 
3435  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
3436 
3437  /// @brief Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32)
3438  __hostdev__ static uint32_t dim() { return 1u << TOTAL; }
3439 
3440  /// @brief Return memory usage in bytes for the class
3441  __hostdev__ static size_t memUsage() { return DataType::memUsage(); }
3442 
3443  /// @brief Return a const reference to the bit mask of active voxels in this internal node
3444  __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }
3445  __hostdev__ const MaskType<LOG2DIM>& getValueMask() const { return DataType::mValueMask; }
3446 
3447  /// @brief Return a const reference to the bit mask of child nodes in this internal node
3448  __hostdev__ const MaskType<LOG2DIM>& childMask() const { return DataType::mChildMask; }
3449  __hostdev__ const MaskType<LOG2DIM>& getChildMask() const { return DataType::mChildMask; }
3450 
3451  /// @brief Return the origin in index space of this leaf node
3452  __hostdev__ CoordType origin() const { return DataType::mBBox.min() & ~MASK; }
3453 
3454  /// @brief Return a const reference to the minimum active value encoded in this internal node and any of its child nodes
3455  __hostdev__ const ValueType& minimum() const { return this->getMin(); }
3456 
3457  /// @brief Return a const reference to the maximum active value encoded in this internal node and any of its child nodes
3458  __hostdev__ const ValueType& maximum() const { return this->getMax(); }
3459 
3460  /// @brief Return a const reference to the average of all the active values encoded in this internal node and any of its child nodes
3461  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
3462 
3463  /// @brief Return the variance of all the active values encoded in this internal node and any of its child nodes
3464  __hostdev__ FloatType variance() const { return DataType::mStdDevi * DataType::mStdDevi; }
3465 
3466  /// @brief Return a const reference to the standard deviation of all the active values encoded in this internal node and any of its child nodes
3467  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
3468 
3469  /// @brief Return a const reference to the bounding box in index space of active values in this internal node and any of its child nodes
3470  __hostdev__ const math::BBox<CoordType>& bbox() const { return DataType::mBBox; }
3471 
3472  /// @brief If the first entry in this node's table is a tile, return the tile's value.
3473  /// Otherwise, return the result of calling getFirstValue() on the child.
3475  {
3476  return DataType::mChildMask.isOn(0) ? this->getChild(0)->getFirstValue() : DataType::getValue(0);
3477  }
3478 
3479  /// @brief If the last entry in this node's table is a tile, return the tile's value.
3480  /// Otherwise, return the result of calling getLastValue() on the child.
3482  {
3483  return DataType::mChildMask.isOn(SIZE - 1) ? this->getChild(SIZE - 1)->getLastValue() : DataType::getValue(SIZE - 1);
3484  }
3485 
3486  /// @brief Return the value of the given voxel
3487  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
3488  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
3489  /// @brief return the state and updates the value of the specified voxel
3490  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
3491  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
3492 
3494  {
3495  const uint32_t n = CoordToOffset(ijk);
3496  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
3497  }
3499  {
3500  const uint32_t n = CoordToOffset(ijk);
3501  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
3502  }
3503 
3504  /// @brief Return the linear offset corresponding to the given coordinate
3505  __hostdev__ static uint32_t CoordToOffset(const CoordType& ijk)
3506  {
3507  return (((ijk[0] & MASK) >> ChildT::TOTAL) << (2 * LOG2DIM)) | // note, we're using bitwise OR instead of +
3508  (((ijk[1] & MASK) >> ChildT::TOTAL) << (LOG2DIM)) |
3509  ((ijk[2] & MASK) >> ChildT::TOTAL);
3510  }
3511 
3512  /// @return the local coordinate of the n'th tile or child node
3513  __hostdev__ static Coord OffsetToLocalCoord(uint32_t n)
3514  {
3515  NANOVDB_ASSERT(n < SIZE);
3516  const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
3517  return Coord(n >> 2 * LOG2DIM, m >> LOG2DIM, m & ((1 << LOG2DIM) - 1));
3518  }
3519 
3520  /// @brief modifies local coordinates to global coordinates of a tile or child node
3521  __hostdev__ void localToGlobalCoord(Coord& ijk) const
3522  {
3523  ijk <<= ChildT::TOTAL;
3524  ijk += this->origin();
3525  }
3526 
3527  __hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
3528  {
3529  Coord ijk = InternalNode::OffsetToLocalCoord(n);
3530  this->localToGlobalCoord(ijk);
3531  return ijk;
3532  }
3533 
3534  /// @brief Return true if this node or any of its child nodes contain active values
3535  __hostdev__ bool isActive() const { return DataType::mFlags & uint32_t(2); }
3536 
3537  template<typename OpT, typename... ArgsT>
3538  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
3539  {
3540  const uint32_t n = CoordToOffset(ijk);
3541  if constexpr(OpT::LEVEL < LEVEL) if (this->isChild(n)) return this->getChild(n)->template get<OpT>(ijk, args...);
3542  return OpT::get(*this, n, args...);
3543  }
3544 
3545  template<typename OpT, typename... ArgsT>
3546  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args)
3547  {
3548  const uint32_t n = CoordToOffset(ijk);
3549  if constexpr(OpT::LEVEL < LEVEL) if (this->isChild(n)) return this->getChild(n)->template set<OpT>(ijk, args...);
3550  return OpT::set(*this, n, args...);
3551  }
3552 
3553 private:
3554  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(InternalData) is misaligned");
3555 
3556  template<typename, int, int, int>
3557  friend class ReadAccessor;
3558 
3559  template<typename>
3560  friend class RootNode;
3561  template<typename, uint32_t>
3562  friend class InternalNode;
3563 
3564  template<typename RayT, typename AccT>
3565  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
3566  {
3567  if (DataType::mFlags & uint32_t(1u))
3568  return this->dim(); // skip this node if the 1st bit is set
3569  //if (!ray.intersects( this->bbox() )) return 1<<TOTAL;
3570 
3571  const uint32_t n = CoordToOffset(ijk);
3572  if (DataType::mChildMask.isOn(n)) {
3573  const ChildT* child = this->getChild(n);
3574  acc.insert(ijk, child);
3575  return child->getDimAndCache(ijk, ray, acc);
3576  }
3577  return ChildNodeType::dim(); // tile value
3578  }
3579 
3580  template<typename OpT, typename AccT, typename... ArgsT>
3581  __hostdev__ typename OpT::Type getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
3582  {
3583  const uint32_t n = CoordToOffset(ijk);
3584  if constexpr(OpT::LEVEL < LEVEL) {
3585  if (this->isChild(n)) {
3586  const ChildT* child = this->getChild(n);
3587  acc.insert(ijk, child);
3588  return child->template getAndCache<OpT>(ijk, acc, args...);
3589  }
3590  }
3591  return OpT::get(*this, n, args...);
3592  }
3593 
3594  template<typename OpT, typename AccT, typename... ArgsT>
3595  __hostdev__ void setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
3596  {
3597  const uint32_t n = CoordToOffset(ijk);
3598  if constexpr(OpT::LEVEL < LEVEL) {
3599  if (this->isChild(n)) {
3600  ChildT* child = this->getChild(n);
3601  acc.insert(ijk, child);
3602  return child->template setAndCache<OpT>(ijk, acc, args...);
3603  }
3604  }
3605  return OpT::set(*this, n, args...);
3606  }
3607 
3608 }; // InternalNode class
3609 
3610 // --------------------------> LeafData<T> <------------------------------------
3611 
3612 /// @brief Stuct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
3613 ///
3614 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3615 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3616 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData
3617 {
3618  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3619  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3620  using ValueType = ValueT;
3621  using BuildType = ValueT;
3623  using ArrayType = ValueT; // type used for the internal mValue array
3624  static constexpr bool FIXED_SIZE = true;
3625 
3626  CoordT mBBoxMin; // 12B.
3627  uint8_t mBBoxDif[3]; // 3B.
3628  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3629  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3630 
3631  ValueType mMinimum; // typically 4B
3632  ValueType mMaximum; // typically 4B
3633  FloatType mAverage; // typically 4B, average of all the active values in this node and its child nodes
3634  FloatType mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
3635  alignas(32) ValueType mValues[1u << 3 * LOG2DIM];
3636 
3637  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3638  ///
3639  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3640  __hostdev__ static constexpr uint32_t padding()
3641  {
3642  return sizeof(LeafData) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * (sizeof(ValueT) + sizeof(FloatType)) + (1u << (3 * LOG2DIM)) * sizeof(ValueT));
3643  }
3644  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
3645 
3646  __hostdev__ static bool hasStats() { return true; }
3647 
3648  __hostdev__ ValueType getValue(uint32_t i) const { return mValues[i]; }
3649  __hostdev__ void setValueOnly(uint32_t offset, const ValueType& value) { mValues[offset] = value; }
3650  __hostdev__ void setValue(uint32_t offset, const ValueType& value)
3651  {
3652  mValueMask.setOn(offset);
3653  mValues[offset] = value;
3654  }
3655  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3656 
3657  __hostdev__ ValueType getMin() const { return mMinimum; }
3658  __hostdev__ ValueType getMax() const { return mMaximum; }
3659  __hostdev__ FloatType getAvg() const { return mAverage; }
3660  __hostdev__ FloatType getDev() const { return mStdDevi; }
3661 
3662 // GCC 11 (and possibly prior versions) has a regression that results in invalid
3663 // warnings when -Wstringop-overflow is turned on. For details, refer to
3664 // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101854
3665 #if defined(__GNUC__) && (__GNUC__ < 12) && !defined(__APPLE__) && !defined(__llvm__)
3666 #pragma GCC diagnostic push
3667 #pragma GCC diagnostic ignored "-Wstringop-overflow"
3668 #endif
3669  __hostdev__ void setMin(const ValueType& v) { mMinimum = v; }
3670  __hostdev__ void setMax(const ValueType& v) { mMaximum = v; }
3671  __hostdev__ void setAvg(const FloatType& v) { mAverage = v; }
3672  __hostdev__ void setDev(const FloatType& v) { mStdDevi = v; }
3673 #if defined(__GNUC__) && (__GNUC__ < 12) && !defined(__APPLE__) && !defined(__llvm__)
3674 #pragma GCC diagnostic pop
3675 #endif
3676 
3677  template<typename T>
3678  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3679 
3680  __hostdev__ void fill(const ValueType& v)
3681  {
3682  for (auto *p = mValues, *q = p + 512; p != q; ++p)
3683  *p = v;
3684  }
3685 
3686  /// @brief This class cannot be constructed or deleted
3687  LeafData() = delete;
3688  LeafData(const LeafData&) = delete;
3689  LeafData& operator=(const LeafData&) = delete;
3690  ~LeafData() = delete;
3691 }; // LeafData<ValueT>
3692 
3693 // --------------------------> LeafFnBase <------------------------------------
3694 
3695 /// @brief Base-class for quantized float leaf nodes
3696 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3697 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafFnBase
3698 {
3699  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3700  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3701  using ValueType = float;
3702  using FloatType = float;
3703 
3704  CoordT mBBoxMin; // 12B.
3705  uint8_t mBBoxDif[3]; // 3B.
3706  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3707  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3708 
3709  float mMinimum; // 4B - minimum of ALL values in this node
3710  float mQuantum; // = (max - min)/15 4B
3711  uint16_t mMin, mMax, mAvg, mDev; // quantized representations of statistics of active values
3712  // no padding since it's always 32B aligned
3713  __hostdev__ static uint64_t memUsage() { return sizeof(LeafFnBase); }
3714 
3715  __hostdev__ static bool hasStats() { return true; }
3716 
3717  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3718  ///
3719  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3720  __hostdev__ static constexpr uint32_t padding()
3721  {
3722  return sizeof(LeafFnBase) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * 4 + 4 * 2);
3723  }
3724  __hostdev__ void init(float min, float max, uint8_t bitWidth)
3725  {
3726  mMinimum = min;
3727  mQuantum = (max - min) / float((1 << bitWidth) - 1);
3728  }
3729 
3730  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3731 
3732  /// @brief return the quantized minimum of the active values in this node
3733  __hostdev__ float getMin() const { return mMin * mQuantum + mMinimum; }
3734 
3735  /// @brief return the quantized maximum of the active values in this node
3736  __hostdev__ float getMax() const { return mMax * mQuantum + mMinimum; }
3737 
3738  /// @brief return the quantized average of the active values in this node
3739  __hostdev__ float getAvg() const { return mAvg * mQuantum + mMinimum; }
3740  /// @brief return the quantized standard deviation of the active values in this node
3741 
3742  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
3743  __hostdev__ float getDev() const { return mDev * mQuantum; }
3744 
3745  /// @note min <= X <= max or 0 <= (X-min)/(min-max) <= 1
3746  __hostdev__ void setMin(float min) { mMin = uint16_t((min - mMinimum) / mQuantum + 0.5f); }
3747 
3748  /// @note min <= X <= max or 0 <= (X-min)/(min-max) <= 1
3749  __hostdev__ void setMax(float max) { mMax = uint16_t((max - mMinimum) / mQuantum + 0.5f); }
3750 
3751  /// @note min <= avg <= max or 0 <= (avg-min)/(min-max) <= 1
3752  __hostdev__ void setAvg(float avg) { mAvg = uint16_t((avg - mMinimum) / mQuantum + 0.5f); }
3753 
3754  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
3755  __hostdev__ void setDev(float dev) { mDev = uint16_t(dev / mQuantum + 0.5f); }
3756 
3757  template<typename T>
3758  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3759 }; // LeafFnBase
3760 
3761 // --------------------------> LeafData<Fp4> <------------------------------------
3762 
3763 /// @brief Stuct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
3764 ///
3765 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3766 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3767 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp4, CoordT, MaskT, LOG2DIM>
3768  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3769 {
3771  using BuildType = Fp4;
3772  using ArrayType = uint8_t; // type used for the internal mValue array
3773  static constexpr bool FIXED_SIZE = true;
3774  alignas(32) uint8_t mCode[1u << (3 * LOG2DIM - 1)]; // LeafFnBase is 32B aligned and so is mCode
3775 
3776  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
3777  __hostdev__ static constexpr uint32_t padding()
3778  {
3779  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3780  return sizeof(LeafData) - sizeof(BaseT) - (1u << (3 * LOG2DIM - 1));
3781  }
3782 
3783  __hostdev__ static constexpr uint8_t bitWidth() { return 4u; }
3784  __hostdev__ float getValue(uint32_t i) const
3785  {
3786 #if 0
3787  const uint8_t c = mCode[i>>1];
3788  return ( (i&1) ? c >> 4 : c & uint8_t(15) )*BaseT::mQuantum + BaseT::mMinimum;
3789 #else
3790  return ((mCode[i >> 1] >> ((i & 1) << 2)) & uint8_t(15)) * BaseT::mQuantum + BaseT::mMinimum;
3791 #endif
3792  }
3793 
3794  /// @brief This class cannot be constructed or deleted
3795  LeafData() = delete;
3796  LeafData(const LeafData&) = delete;
3797  LeafData& operator=(const LeafData&) = delete;
3798  ~LeafData() = delete;
3799 }; // LeafData<Fp4>
3800 
3801 // --------------------------> LeafBase<Fp8> <------------------------------------
3802 
3803 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3804 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp8, CoordT, MaskT, LOG2DIM>
3805  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3806 {
3808  using BuildType = Fp8;
3809  using ArrayType = uint8_t; // type used for the internal mValue array
3810  static constexpr bool FIXED_SIZE = true;
3811  alignas(32) uint8_t mCode[1u << 3 * LOG2DIM];
3812  __hostdev__ static constexpr int64_t memUsage() { return sizeof(LeafData); }
3813  __hostdev__ static constexpr uint32_t padding()
3814  {
3815  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3816  return sizeof(LeafData) - sizeof(BaseT) - (1u << 3 * LOG2DIM);
3817  }
3818 
3819  __hostdev__ static constexpr uint8_t bitWidth() { return 8u; }
3820  __hostdev__ float getValue(uint32_t i) const
3821  {
3822  return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/255 + min
3823  }
3824  /// @brief This class cannot be constructed or deleted
3825  LeafData() = delete;
3826  LeafData(const LeafData&) = delete;
3827  LeafData& operator=(const LeafData&) = delete;
3828  ~LeafData() = delete;
3829 }; // LeafData<Fp8>
3830 
3831 // --------------------------> LeafData<Fp16> <------------------------------------
3832 
3833 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3834 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp16, CoordT, MaskT, LOG2DIM>
3835  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3836 {
3838  using BuildType = Fp16;
3839  using ArrayType = uint16_t; // type used for the internal mValue array
3840  static constexpr bool FIXED_SIZE = true;
3841  alignas(32) uint16_t mCode[1u << 3 * LOG2DIM];
3842 
3843  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
3844  __hostdev__ static constexpr uint32_t padding()
3845  {
3846  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3847  return sizeof(LeafData) - sizeof(BaseT) - 2 * (1u << 3 * LOG2DIM);
3848  }
3849 
3850  __hostdev__ static constexpr uint8_t bitWidth() { return 16u; }
3851  __hostdev__ float getValue(uint32_t i) const
3852  {
3853  return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/65535 + min
3854  }
3855 
3856  /// @brief This class cannot be constructed or deleted
3857  LeafData() = delete;
3858  LeafData(const LeafData&) = delete;
3859  LeafData& operator=(const LeafData&) = delete;
3860  ~LeafData() = delete;
3861 }; // LeafData<Fp16>
3862 
3863 // --------------------------> LeafData<FpN> <------------------------------------
3864 
3865 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3866 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<FpN, CoordT, MaskT, LOG2DIM>
3867  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3868 { // this class has no additional data members, however every instance is immediately followed by
3869  // bitWidth*64 bytes. Since its base class is 32B aligned so are the bitWidth*64 bytes
3871  using BuildType = FpN;
3872  static constexpr bool FIXED_SIZE = false;
3873  __hostdev__ static constexpr uint32_t padding()
3874  {
3875  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3876  return 0;
3877  }
3878 
3879  __hostdev__ uint8_t bitWidth() const { return 1 << (BaseT::mFlags >> 5); } // 4,8,16,32 = 2^(2,3,4,5)
3880  __hostdev__ size_t memUsage() const { return sizeof(*this) + this->bitWidth() * 64; }
3881  __hostdev__ static size_t memUsage(uint32_t bitWidth) { return 96u + bitWidth * 64; }
3882  __hostdev__ float getValue(uint32_t i) const
3883  {
3884 #ifdef NANOVDB_FPN_BRANCHLESS // faster
3885  const int b = BaseT::mFlags >> 5; // b = 0, 1, 2, 3, 4 corresponding to 1, 2, 4, 8, 16 bits
3886 #if 0 // use LUT
3887  uint16_t code = reinterpret_cast<const uint16_t*>(this + 1)[i >> (4 - b)];
3888  const static uint8_t shift[5] = {15, 7, 3, 1, 0};
3889  const static uint16_t mask[5] = {1, 3, 15, 255, 65535};
3890  code >>= (i & shift[b]) << b;
3891  code &= mask[b];
3892 #else // no LUT
3893  uint32_t code = reinterpret_cast<const uint32_t*>(this + 1)[i >> (5 - b)];
3894  code >>= (i & ((32 >> b) - 1)) << b;
3895  code &= (1 << (1 << b)) - 1;
3896 #endif
3897 #else // use branched version (slow)
3898  float code;
3899  auto* values = reinterpret_cast<const uint8_t*>(this + 1);
3900  switch (BaseT::mFlags >> 5) {
3901  case 0u: // 1 bit float
3902  code = float((values[i >> 3] >> (i & 7)) & uint8_t(1));
3903  break;
3904  case 1u: // 2 bits float
3905  code = float((values[i >> 2] >> ((i & 3) << 1)) & uint8_t(3));
3906  break;
3907  case 2u: // 4 bits float
3908  code = float((values[i >> 1] >> ((i & 1) << 2)) & uint8_t(15));
3909  break;
3910  case 3u: // 8 bits float
3911  code = float(values[i]);
3912  break;
3913  default: // 16 bits float
3914  code = float(reinterpret_cast<const uint16_t*>(values)[i]);
3915  }
3916 #endif
3917  return float(code) * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/UNITS + min
3918  }
3919 
3920  /// @brief This class cannot be constructed or deleted
3921  LeafData() = delete;
3922  LeafData(const LeafData&) = delete;
3923  LeafData& operator=(const LeafData&) = delete;
3924  ~LeafData() = delete;
3925 }; // LeafData<FpN>
3926 
3927 // --------------------------> LeafData<bool> <------------------------------------
3928 
3929 // Partial template specialization of LeafData with bool
3930 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3931 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<bool, CoordT, MaskT, LOG2DIM>
3932 {
3933  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3934  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3935  using ValueType = bool;
3936  using BuildType = bool;
3937  using FloatType = bool; // dummy value type
3938  using ArrayType = MaskT<LOG2DIM>; // type used for the internal mValue array
3939  static constexpr bool FIXED_SIZE = true;
3940 
3941  CoordT mBBoxMin; // 12B.
3942  uint8_t mBBoxDif[3]; // 3B.
3943  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3944  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3945  MaskT<LOG2DIM> mValues; // LOG2DIM(3): 64B.
3946  uint64_t mPadding[2]; // 16B padding to 32B alignment
3947 
3948  __hostdev__ static constexpr uint32_t padding() { return sizeof(LeafData) - 12u - 3u - 1u - 2 * sizeof(MaskT<LOG2DIM>) - 16u; }
3949  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
3950  __hostdev__ static bool hasStats() { return false; }
3951  __hostdev__ bool getValue(uint32_t i) const { return mValues.isOn(i); }
3952  __hostdev__ bool getMin() const { return false; } // dummy
3953  __hostdev__ bool getMax() const { return false; } // dummy
3954  __hostdev__ bool getAvg() const { return false; } // dummy
3955  __hostdev__ bool getDev() const { return false; } // dummy
3956  __hostdev__ void setValue(uint32_t offset, bool v)
3957  {
3958  mValueMask.setOn(offset);
3959  mValues.set(offset, v);
3960  }
3961  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3962  __hostdev__ void setMin(const bool&) {} // no-op
3963  __hostdev__ void setMax(const bool&) {} // no-op
3964  __hostdev__ void setAvg(const bool&) {} // no-op
3965  __hostdev__ void setDev(const bool&) {} // no-op
3966 
3967  template<typename T>
3968  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3969 
3970  /// @brief This class cannot be constructed or deleted
3971  LeafData() = delete;
3972  LeafData(const LeafData&) = delete;
3973  LeafData& operator=(const LeafData&) = delete;
3974  ~LeafData() = delete;
3975 }; // LeafData<bool>
3976 
3977 // --------------------------> LeafData<ValueMask> <------------------------------------
3978 
3979 // Partial template specialization of LeafData with ValueMask
3980 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3981 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueMask, CoordT, MaskT, LOG2DIM>
3982 {
3983  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3984  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3985  using ValueType = bool;
3987  using FloatType = bool; // dummy value type
3988  using ArrayType = void; // type used for the internal mValue array - void means missing
3989  static constexpr bool FIXED_SIZE = true;
3990 
3991  CoordT mBBoxMin; // 12B.
3992  uint8_t mBBoxDif[3]; // 3B.
3993  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3994  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3995  uint64_t mPadding[2]; // 16B padding to 32B alignment
3996 
3997  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
3998  __hostdev__ static bool hasStats() { return false; }
3999  __hostdev__ static constexpr uint32_t padding()
4000  {
4001  return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
4002  }
4003 
4004  __hostdev__ bool getValue(uint32_t i) const { return mValueMask.isOn(i); }
4005  __hostdev__ bool getMin() const { return false; } // dummy
4006  __hostdev__ bool getMax() const { return false; } // dummy
4007  __hostdev__ bool getAvg() const { return false; } // dummy
4008  __hostdev__ bool getDev() const { return false; } // dummy
4009  __hostdev__ void setValue(uint32_t offset, bool) { mValueMask.setOn(offset); }
4010  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4011  __hostdev__ void setMin(const ValueType&) {} // no-op
4012  __hostdev__ void setMax(const ValueType&) {} // no-op
4013  __hostdev__ void setAvg(const FloatType&) {} // no-op
4014  __hostdev__ void setDev(const FloatType&) {} // no-op
4015 
4016  template<typename T>
4017  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4018 
4019  /// @brief This class cannot be constructed or deleted
4020  LeafData() = delete;
4021  LeafData(const LeafData&) = delete;
4022  LeafData& operator=(const LeafData&) = delete;
4023  ~LeafData() = delete;
4024 }; // LeafData<ValueMask>
4025 
4026 // --------------------------> LeafIndexBase <------------------------------------
4027 
4028 // Partial template specialization of LeafData with ValueIndex
4029 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4030 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafIndexBase
4031 {
4032  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4033  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4034  using ValueType = uint64_t;
4035  using FloatType = uint64_t;
4036  using ArrayType = void; // type used for the internal mValue array - void means missing
4037  static constexpr bool FIXED_SIZE = true;
4038 
4039  CoordT mBBoxMin; // 12B.
4040  uint8_t mBBoxDif[3]; // 3B.
4041  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4042  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4043  uint64_t mOffset, mPrefixSum; // 8B offset to first value in this leaf node and 9-bit prefix sum
4044  __hostdev__ static constexpr uint32_t padding()
4045  {
4046  return sizeof(LeafIndexBase) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
4047  }
4048  __hostdev__ static uint64_t memUsage() { return sizeof(LeafIndexBase); }
4049  __hostdev__ bool hasStats() const { return mFlags & (uint8_t(1) << 4); }
4050  // return the offset to the first value indexed by this leaf node
4051  __hostdev__ const uint64_t& firstOffset() const { return mOffset; }
4052  __hostdev__ void setMin(const ValueType&) {} // no-op
4053  __hostdev__ void setMax(const ValueType&) {} // no-op
4054  __hostdev__ void setAvg(const FloatType&) {} // no-op
4055  __hostdev__ void setDev(const FloatType&) {} // no-op
4056  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4057  template<typename T>
4058  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4059 
4060 protected:
4061  /// @brief This class should be used as an abstract class and only constructed or deleted via child classes
4062  LeafIndexBase() = default;
4063  LeafIndexBase(const LeafIndexBase&) = default;
4064  LeafIndexBase& operator=(const LeafIndexBase&) = default;
4065  ~LeafIndexBase() = default;
4066 }; // LeafIndexBase
4067 
4068 // --------------------------> LeafData<ValueIndex> <------------------------------------
4069 
4070 // Partial template specialization of LeafData with ValueIndex
4071 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4072 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueIndex, CoordT, MaskT, LOG2DIM>
4073  : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
4074 {
4077  // return the total number of values indexed by this leaf node, excluding the optional 4 stats
4078  __hostdev__ static uint32_t valueCount() { return uint32_t(512); } // 8^3 = 2^9
4079  // return the offset to the last value indexed by this leaf node (disregarding optional stats)
4080  __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + 511u; } // 2^9 - 1
4081  // if stats are available, they are always placed after the last voxel value in this leaf node
4082  __hostdev__ uint64_t getMin() const { return this->hasStats() ? BaseT::mOffset + 512u : 0u; }
4083  __hostdev__ uint64_t getMax() const { return this->hasStats() ? BaseT::mOffset + 513u : 0u; }
4084  __hostdev__ uint64_t getAvg() const { return this->hasStats() ? BaseT::mOffset + 514u : 0u; }
4085  __hostdev__ uint64_t getDev() const { return this->hasStats() ? BaseT::mOffset + 515u : 0u; }
4086  __hostdev__ uint64_t getValue(uint32_t i) const { return BaseT::mOffset + i; } // dense leaf node with active and inactive voxels
4087 }; // LeafData<ValueIndex>
4088 
4089 // --------------------------> LeafData<ValueOnIndex> <------------------------------------
4090 
4091 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4092 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueOnIndex, CoordT, MaskT, LOG2DIM>
4093  : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
4094 {
4097  __hostdev__ uint32_t valueCount() const
4098  {
4099  return util::countOn(BaseT::mValueMask.words()[7]) + (BaseT::mPrefixSum >> 54u & 511u); // last 9 bits of mPrefixSum do not account for the last word in mValueMask
4100  }
4101  __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + this->valueCount() - 1u; }
4102  __hostdev__ uint64_t getMin() const { return this->hasStats() ? this->lastOffset() + 1u : 0u; }
4103  __hostdev__ uint64_t getMax() const { return this->hasStats() ? this->lastOffset() + 2u : 0u; }
4104  __hostdev__ uint64_t getAvg() const { return this->hasStats() ? this->lastOffset() + 3u : 0u; }
4105  __hostdev__ uint64_t getDev() const { return this->hasStats() ? this->lastOffset() + 4u : 0u; }
4106  __hostdev__ uint64_t getValue(uint32_t i) const
4107  {
4108  //return mValueMask.isOn(i) ? mOffset + mValueMask.countOn(i) : 0u;// for debugging
4109  uint32_t n = i >> 6;
4110  const uint64_t w = BaseT::mValueMask.words()[n], mask = uint64_t(1) << (i & 63u);
4111  if (!(w & mask)) return uint64_t(0); // if i'th value is inactive return offset to background value
4112  uint64_t sum = BaseT::mOffset + util::countOn(w & (mask - 1u));
4113  if (n--) sum += BaseT::mPrefixSum >> (9u * n) & 511u;
4114  return sum;
4115  }
4116 }; // LeafData<ValueOnIndex>
4117 
4118 // --------------------------> LeafData<ValueIndexMask> <------------------------------------
4119 
4120 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4121 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueIndexMask, CoordT, MaskT, LOG2DIM>
4122  : public LeafData<ValueIndex, CoordT, MaskT, LOG2DIM>
4123 {
4125  MaskT<LOG2DIM> mMask;
4126  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4127  __hostdev__ bool isMaskOn(uint32_t offset) const { return mMask.isOn(offset); }
4128  __hostdev__ void setMask(uint32_t offset, bool v) { mMask.set(offset, v); }
4129 }; // LeafData<ValueIndexMask>
4130 
4131 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4132 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueOnIndexMask, CoordT, MaskT, LOG2DIM>
4133  : public LeafData<ValueOnIndex, CoordT, MaskT, LOG2DIM>
4134 {
4136  MaskT<LOG2DIM> mMask;
4137  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4138  __hostdev__ bool isMaskOn(uint32_t offset) const { return mMask.isOn(offset); }
4139  __hostdev__ void setMask(uint32_t offset, bool v) { mMask.set(offset, v); }
4140 }; // LeafData<ValueOnIndexMask>
4141 
4142 // --------------------------> LeafData<Point> <------------------------------------
4143 
4144 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4145 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Point, CoordT, MaskT, LOG2DIM>
4146 {
4147  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4148  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4149  using ValueType = uint64_t;
4150  using BuildType = Point;
4152  using ArrayType = uint16_t; // type used for the internal mValue array
4153  static constexpr bool FIXED_SIZE = true;
4154 
4155  CoordT mBBoxMin; // 12B.
4156  uint8_t mBBoxDif[3]; // 3B.
4157  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4158  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4159 
4160  uint64_t mOffset; // 8B
4161  uint64_t mPointCount; // 8B
4162  alignas(32) uint16_t mValues[1u << 3 * LOG2DIM]; // 1KB
4163  // no padding
4164 
4165  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
4166  ///
4167  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
4168  __hostdev__ static constexpr uint32_t padding()
4169  {
4170  return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u + (1u << 3 * LOG2DIM) * 2u);
4171  }
4172  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4173 
4174  __hostdev__ uint64_t offset() const { return mOffset; }
4175  __hostdev__ uint64_t pointCount() const { return mPointCount; }
4176  __hostdev__ uint64_t first(uint32_t i) const { return i ? uint64_t(mValues[i - 1u]) + mOffset : mOffset; }
4177  __hostdev__ uint64_t last(uint32_t i) const { return uint64_t(mValues[i]) + mOffset; }
4178  __hostdev__ uint64_t getValue(uint32_t i) const { return uint64_t(mValues[i]); }
4179  __hostdev__ void setValueOnly(uint32_t offset, uint16_t value) { mValues[offset] = value; }
4180  __hostdev__ void setValue(uint32_t offset, uint16_t value)
4181  {
4182  mValueMask.setOn(offset);
4183  mValues[offset] = value;
4184  }
4185  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4186 
4187  __hostdev__ ValueType getMin() const { return mOffset; }
4188  __hostdev__ ValueType getMax() const { return mPointCount; }
4189  __hostdev__ FloatType getAvg() const { return 0.0f; }
4190  __hostdev__ FloatType getDev() const { return 0.0f; }
4191 
4192  __hostdev__ void setMin(const ValueType&) {}
4193  __hostdev__ void setMax(const ValueType&) {}
4194  __hostdev__ void setAvg(const FloatType&) {}
4195  __hostdev__ void setDev(const FloatType&) {}
4196 
4197  template<typename T>
4198  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4199 
4200  /// @brief This class cannot be constructed or deleted
4201  LeafData() = delete;
4202  LeafData(const LeafData&) = delete;
4203  LeafData& operator=(const LeafData&) = delete;
4204  ~LeafData() = delete;
4205 }; // LeafData<Point>
4206 
4207 // --------------------------> LeafNode<T> <------------------------------------
4208 
4209 /// @brief Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
4210 template<typename BuildT,
4211  typename CoordT = Coord,
4212  template<uint32_t> class MaskT = Mask,
4213  uint32_t Log2Dim = 3>
4214 class LeafNode : public LeafData<BuildT, CoordT, MaskT, Log2Dim>
4215 {
4216 public:
4218  {
4219  static constexpr uint32_t TOTAL = 0;
4220  static constexpr uint32_t DIM = 1;
4221  __hostdev__ static uint32_t dim() { return 1u; }
4222  }; // Voxel
4225  using ValueType = typename DataType::ValueType;
4226  using FloatType = typename DataType::FloatType;
4227  using BuildType = typename DataType::BuildType;
4228  using CoordType = CoordT;
4229  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
4230  template<uint32_t LOG2>
4231  using MaskType = MaskT<LOG2>;
4232  template<bool ON>
4233  using MaskIterT = typename Mask<Log2Dim>::template Iterator<ON>;
4234 
4235  /// @brief Visits all active values in a leaf node
4236  class ValueOnIterator : public MaskIterT<true>
4237  {
4238  using BaseT = MaskIterT<true>;
4239  const LeafNode* mParent;
4240 
4241  public:
4243  : BaseT()
4244  , mParent(nullptr)
4245  {
4246  }
4248  : BaseT(parent->data()->mValueMask.beginOn())
4249  , mParent(parent)
4250  {
4251  }
4252  ValueOnIterator& operator=(const ValueOnIterator&) = default;
4254  {
4255  NANOVDB_ASSERT(*this);
4256  return mParent->getValue(BaseT::pos());
4257  }
4258  __hostdev__ CoordT getCoord() const
4259  {
4260  NANOVDB_ASSERT(*this);
4261  return mParent->offsetToGlobalCoord(BaseT::pos());
4262  }
4263  }; // Member class ValueOnIterator
4264 
4265  __hostdev__ ValueOnIterator beginValueOn() const { return ValueOnIterator(this); }
4266  __hostdev__ ValueOnIterator cbeginValueOn() const { return ValueOnIterator(this); }
4267 
4268  /// @brief Visits all inactive values in a leaf node
4269  class ValueOffIterator : public MaskIterT<false>
4270  {
4271  using BaseT = MaskIterT<false>;
4272  const LeafNode* mParent;
4273 
4274  public:
4276  : BaseT()
4277  , mParent(nullptr)
4278  {
4279  }
4281  : BaseT(parent->data()->mValueMask.beginOff())
4282  , mParent(parent)
4283  {
4284  }
4285  ValueOffIterator& operator=(const ValueOffIterator&) = default;
4287  {
4288  NANOVDB_ASSERT(*this);
4289  return mParent->getValue(BaseT::pos());
4290  }
4291  __hostdev__ CoordT getCoord() const
4292  {
4293  NANOVDB_ASSERT(*this);
4294  return mParent->offsetToGlobalCoord(BaseT::pos());
4295  }
4296  }; // Member class ValueOffIterator
4297 
4298  __hostdev__ ValueOffIterator beginValueOff() const { return ValueOffIterator(this); }
4299  __hostdev__ ValueOffIterator cbeginValueOff() const { return ValueOffIterator(this); }
4300 
4301  /// @brief Visits all values in a leaf node, i.e. both active and inactive values
4303  {
4304  const LeafNode* mParent;
4305  uint32_t mPos;
4306 
4307  public:
4309  : mParent(nullptr)
4310  , mPos(1u << 3 * Log2Dim)
4311  {
4312  }
4314  : mParent(parent)
4315  , mPos(0)
4316  {
4317  NANOVDB_ASSERT(parent);
4318  }
4319  ValueIterator& operator=(const ValueIterator&) = default;
4321  {
4322  NANOVDB_ASSERT(*this);
4323  return mParent->getValue(mPos);
4324  }
4325  __hostdev__ CoordT getCoord() const
4326  {
4327  NANOVDB_ASSERT(*this);
4328  return mParent->offsetToGlobalCoord(mPos);
4329  }
4330  __hostdev__ bool isActive() const
4331  {
4332  NANOVDB_ASSERT(*this);
4333  return mParent->isActive(mPos);
4334  }
4335  __hostdev__ operator bool() const { return mPos < (1u << 3 * Log2Dim); }
4337  {
4338  ++mPos;
4339  return *this;
4340  }
4342  {
4343  auto tmp = *this;
4344  ++(*this);
4345  return tmp;
4346  }
4347  }; // Member class ValueIterator
4348 
4349  __hostdev__ ValueIterator beginValue() const { return ValueIterator(this); }
4350  __hostdev__ ValueIterator cbeginValueAll() const { return ValueIterator(this); }
4351 
4352  static_assert(util::is_same<ValueType, typename BuildToValueMap<BuildType>::Type>::value, "Mismatching BuildType");
4353  static constexpr uint32_t LOG2DIM = Log2Dim;
4354  static constexpr uint32_t TOTAL = LOG2DIM; // needed by parent nodes
4355  static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
4356  static constexpr uint32_t SIZE = 1u << 3 * LOG2DIM; // total number of voxels represented by this node
4357  static constexpr uint32_t MASK = (1u << LOG2DIM) - 1u; // mask for bit operations
4358  static constexpr uint32_t LEVEL = 0; // level 0 = leaf
4359  static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node
4360 
4361  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
4362 
4363  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
4364 
4365  /// @brief Return a const reference to the bit mask of active voxels in this leaf node
4366  __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }
4367  __hostdev__ const MaskType<LOG2DIM>& getValueMask() const { return DataType::mValueMask; }
4368 
4369  /// @brief Return a const reference to the minimum active value encoded in this leaf node
4370  __hostdev__ ValueType minimum() const { return DataType::getMin(); }
4371 
4372  /// @brief Return a const reference to the maximum active value encoded in this leaf node
4373  __hostdev__ ValueType maximum() const { return DataType::getMax(); }
4374 
4375  /// @brief Return a const reference to the average of all the active values encoded in this leaf node
4376  __hostdev__ FloatType average() const { return DataType::getAvg(); }
4377 
4378  /// @brief Return the variance of all the active values encoded in this leaf node
4379  __hostdev__ FloatType variance() const { return Pow2(DataType::getDev()); }
4380 
4381  /// @brief Return a const reference to the standard deviation of all the active values encoded in this leaf node
4382  __hostdev__ FloatType stdDeviation() const { return DataType::getDev(); }
4383 
4384  __hostdev__ uint8_t flags() const { return DataType::mFlags; }
4385 
4386  /// @brief Return the origin in index space of this leaf node
4387  __hostdev__ CoordT origin() const { return DataType::mBBoxMin & ~MASK; }
4388 
4389  /// @brief Compute the local coordinates from a linear offset
4390  /// @param n Linear offset into this nodes dense table
4391  /// @return Local (vs global) 3D coordinates
4392  __hostdev__ static CoordT OffsetToLocalCoord(uint32_t n)
4393  {
4394  NANOVDB_ASSERT(n < SIZE);
4395  const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
4396  return CoordT(n >> 2 * LOG2DIM, m >> LOG2DIM, m & MASK);
4397  }
4398 
4399  /// @brief Converts (in place) a local index coordinate to a global index coordinate
4400  __hostdev__ void localToGlobalCoord(Coord& ijk) const { ijk += this->origin(); }
4401 
4402  __hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
4403  {
4404  return OffsetToLocalCoord(n) + this->origin();
4405  }
4406 
4407  /// @brief Return the dimension, in index space, of this leaf node (typically 8 as for openvdb leaf nodes!)
4408  __hostdev__ static uint32_t dim() { return 1u << LOG2DIM; }
4409 
4410  /// @brief Return the bounding box in index space of active values in this leaf node
4411  __hostdev__ math::BBox<CoordT> bbox() const
4412  {
4413  math::BBox<CoordT> bbox(DataType::mBBoxMin, DataType::mBBoxMin);
4414  if (this->hasBBox()) {
4415  bbox.max()[0] += DataType::mBBoxDif[0];
4416  bbox.max()[1] += DataType::mBBoxDif[1];
4417  bbox.max()[2] += DataType::mBBoxDif[2];
4418  } else { // very rare case
4419  bbox = math::BBox<CoordT>(); // invalid
4420  }
4421  return bbox;
4422  }
4423 
4424  /// @brief Return the total number of voxels (e.g. values) encoded in this leaf node
4425  __hostdev__ static uint32_t voxelCount() { return 1u << (3 * LOG2DIM); }
4426 
4427  __hostdev__ static uint32_t padding() { return DataType::padding(); }
4428 
4429  /// @brief return memory usage in bytes for the leaf node
4430  __hostdev__ uint64_t memUsage() const { return DataType::memUsage(); }
4431 
4432  /// @brief This class cannot be constructed or deleted
4433  LeafNode() = delete;
4434  LeafNode(const LeafNode&) = delete;
4435  LeafNode& operator=(const LeafNode&) = delete;
4436  ~LeafNode() = delete;
4437 
4438  /// @brief Return the voxel value at the given offset.
4439  __hostdev__ ValueType getValue(uint32_t offset) const { return DataType::getValue(offset); }
4440 
4441  /// @brief Return the voxel value at the given coordinate.
4442  __hostdev__ ValueType getValue(const CoordT& ijk) const { return DataType::getValue(CoordToOffset(ijk)); }
4443 
4444  /// @brief Return the first value in this leaf node.
4445  __hostdev__ ValueType getFirstValue() const { return this->getValue(0); }
4446  /// @brief Return the last value in this leaf node.
4447  __hostdev__ ValueType getLastValue() const { return this->getValue(SIZE - 1); }
4448 
4449  /// @brief Sets the value at the specified location and activate its state.
4450  ///
4451  /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
4452  __hostdev__ void setValue(const CoordT& ijk, const ValueType& v) { DataType::setValue(CoordToOffset(ijk), v); }
4453 
4454  /// @brief Sets the value at the specified location but leaves its state unchanged.
4455  ///
4456  /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
4457  __hostdev__ void setValueOnly(uint32_t offset, const ValueType& v) { DataType::setValueOnly(offset, v); }
4458  __hostdev__ void setValueOnly(const CoordT& ijk, const ValueType& v) { DataType::setValueOnly(CoordToOffset(ijk), v); }
4459 
4460  /// @brief Return @c true if the voxel value at the given coordinate is active.
4461  __hostdev__ bool isActive(const CoordT& ijk) const { return DataType::mValueMask.isOn(CoordToOffset(ijk)); }
4462  __hostdev__ bool isActive(uint32_t n) const { return DataType::mValueMask.isOn(n); }
4463 
4464  /// @brief Return @c true if any of the voxel value are active in this leaf node.
4465  __hostdev__ bool isActive() const
4466  {
4467  //NANOVDB_ASSERT( bool(DataType::mFlags & uint8_t(2)) != DataType::mValueMask.isOff() );
4468  //return DataType::mFlags & uint8_t(2);
4469  return !DataType::mValueMask.isOff();
4470  }
4471 
4472  __hostdev__ bool hasBBox() const { return DataType::mFlags & uint8_t(2); }
4473 
4474  /// @brief Return @c true if the voxel value at the given coordinate is active and updates @c v with the value.
4475  __hostdev__ bool probeValue(const CoordT& ijk, ValueType& v) const
4476  {
4477  const uint32_t n = CoordToOffset(ijk);
4478  v = DataType::getValue(n);
4479  return DataType::mValueMask.isOn(n);
4480  }
4481 
4482  __hostdev__ const LeafNode* probeLeaf(const CoordT&) const { return this; }
4483 
4484  /// @brief Return the linear offset corresponding to the given coordinate
4485  __hostdev__ static uint32_t CoordToOffset(const CoordT& ijk)
4486  {
4487  return ((ijk[0] & MASK) << (2 * LOG2DIM)) | ((ijk[1] & MASK) << LOG2DIM) | (ijk[2] & MASK);
4488  }
4489 
4490  /// @brief Updates the local bounding box of active voxels in this node. Return true if bbox was updated.
4491  ///
4492  /// @warning It assumes that the origin and value mask have already been set.
4493  ///
4494  /// @details This method is based on few (intrinsic) bit operations and hence is relatively fast.
4495  /// However, it should only only be called if either the value mask has changed or if the
4496  /// active bounding box is still undefined. e.g. during construction of this node.
4497  __hostdev__ bool updateBBox();
4498 
4499  template<typename OpT, typename... ArgsT>
4500  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
4501  {
4502  return OpT::get(*this, CoordToOffset(ijk), args...);
4503  }
4504 
4505  template<typename OpT, typename... ArgsT>
4506  __hostdev__ auto get(const uint32_t n, ArgsT&&... args) const
4507  {
4508  return OpT::get(*this, n, args...);
4509  }
4510 
4511  template<typename OpT, typename... ArgsT>
4512  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
4513  {
4514  return OpT::set(*this, CoordToOffset(ijk), args...);
4515  }
4516 
4517  template<typename OpT, typename... ArgsT>
4518  __hostdev__ auto set(const uint32_t n, ArgsT&&... args)
4519  {
4520  return OpT::set(*this, n, args...);
4521  }
4522 
4523 private:
4524  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(LeafData) is misaligned");
4525 
4526  template<typename, int, int, int>
4527  friend class ReadAccessor;
4528 
4529  template<typename>
4530  friend class RootNode;
4531  template<typename, uint32_t>
4532  friend class InternalNode;
4533 
4534  template<typename RayT, typename AccT>
4535  __hostdev__ uint32_t getDimAndCache(const CoordT&, const RayT& /*ray*/, const AccT&) const
4536  {
4537  if (DataType::mFlags & uint8_t(1u))
4538  return this->dim(); // skip this node if the 1st bit is set
4539 
4540  //if (!ray.intersects( this->bbox() )) return 1 << LOG2DIM;
4541  return ChildNodeType::dim();
4542  }
4543 
4544  template<typename OpT, typename AccT, typename... ArgsT>
4545  __hostdev__ auto
4546  //__hostdev__ decltype(OpT::get(util::declval<const LeafNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
4547  getAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args) const
4548  {
4549  return OpT::get(*this, CoordToOffset(ijk), args...);
4550  }
4551 
4552  template<typename OpT, typename AccT, typename... ArgsT>
4553  //__hostdev__ auto // occasionally fails with NVCC
4554  __hostdev__ decltype(OpT::set(util::declval<LeafNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
4555  setAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args)
4556  {
4557  return OpT::set(*this, CoordToOffset(ijk), args...);
4558  }
4559 
4560 }; // LeafNode class
4561 
4562 // --------------------------> LeafNode<T>::updateBBox <------------------------------------
4563 
4564 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4566 {
4567  static_assert(LOG2DIM == 3, "LeafNode::updateBBox: only supports LOGDIM = 3!");
4568  if (DataType::mValueMask.isOff()) {
4569  DataType::mFlags &= ~uint8_t(2); // set 2nd bit off, which indicates that this nodes has no bbox
4570  return false;
4571  }
4572  auto update = [&](uint32_t min, uint32_t max, int axis) {
4573  NANOVDB_ASSERT(min <= max && max < 8);
4574  DataType::mBBoxMin[axis] = (DataType::mBBoxMin[axis] & ~MASK) + int(min);
4575  DataType::mBBoxDif[axis] = uint8_t(max - min);
4576  };
4577  uint64_t *w = DataType::mValueMask.words(), word64 = *w;
4578  uint32_t Xmin = word64 ? 0u : 8u, Xmax = Xmin;
4579  for (int i = 1; i < 8; ++i) { // last loop over 7 remaining 64 bit words
4580  if (w[i]) { // skip if word has no set bits
4581  word64 |= w[i]; // union 8 x 64 bits words into one 64 bit word
4582  if (Xmin == 8)
4583  Xmin = i; // only set once
4584  Xmax = i;
4585  }
4586  }
4587  NANOVDB_ASSERT(word64);
4588  update(Xmin, Xmax, 0);
4589  update(util::findLowestOn(word64) >> 3, util::findHighestOn(word64) >> 3, 1);
4590  const uint32_t *p = reinterpret_cast<const uint32_t*>(&word64), word32 = p[0] | p[1];
4591  const uint16_t *q = reinterpret_cast<const uint16_t*>(&word32), word16 = q[0] | q[1];
4592  const uint8_t *b = reinterpret_cast<const uint8_t*>(&word16), byte = b[0] | b[1];
4593  NANOVDB_ASSERT(byte);
4594  update(util::findLowestOn(static_cast<uint32_t>(byte)), util::findHighestOn(static_cast<uint32_t>(byte)), 2);
4595  DataType::mFlags |= uint8_t(2); // set 2nd bit on, which indicates that this nodes has a bbox
4596  return true;
4597 } // LeafNode::updateBBox
4598 
4599 // --------------------------> Template specializations and traits <------------------------------------
4600 
4601 /// @brief Template specializations to the default configuration used in OpenVDB:
4602 /// Root -> 32^3 -> 16^3 -> 8^3
4603 template<typename BuildT>
4605 template<typename BuildT>
4607 template<typename BuildT>
4609 template<typename BuildT>
4611 template<typename BuildT>
4613 template<typename BuildT>
4615 
4616 /// @brief Trait to map from LEVEL to node type
4617 template<typename BuildT, int LEVEL>
4618 struct NanoNode;
4619 
4620 // Partial template specialization of above Node struct
4621 template<typename BuildT>
4622 struct NanoNode<BuildT, 0>
4623 {
4626 };
4627 template<typename BuildT>
4628 struct NanoNode<BuildT, 1>
4629 {
4632 };
4633 template<typename BuildT>
4634 struct NanoNode<BuildT, 2>
4635 {
4638 };
4639 template<typename BuildT>
4640 struct NanoNode<BuildT, 3>
4641 {
4644 };
4645 
4666 
4688 
4689 // --------------------------> callNanoGrid <------------------------------------
4690 
4691 /**
4692 * @brief Below is an example of the struct used for generic programming with callNanoGrid
4693 * @details For an example see "struct Crc32TailOld" in nanovdb/tools/GridChecksum.h or
4694 * "struct IsNanoGridValid" in nanovdb/tools/GridValidator.h
4695 * @code
4696 * struct OpT {
4697  // define these two static functions with non-const GridData
4698 * template <typename BuildT>
4699 * static auto known( GridData *gridData, args...);
4700 * static auto unknown( GridData *gridData, args...);
4701 * // or alternatively these two static functions with const GridData
4702 * template <typename BuildT>
4703 * static auto known(const GridData *gridData, args...);
4704 * static auto unknown(const GridData *gridData, args...);
4705 * };
4706 * @endcode
4707 *
4708 * @brief Here is an example of how to use callNanoGrid in client code
4709 * @code
4710 * return callNanoGrid<OpT>(gridData, args...);
4711 * @endcode
4712 */
4713 
4714 /// @brief Use this function, which depends a pointer to GridData, to call
4715 /// other functions that depend on a NanoGrid of a known ValueType.
4716 /// @details This function allows for generic programming by converting GridData
4717 /// to a NanoGrid of the type encoded in GridData::mGridType.
4718 template<typename OpT, typename GridDataT, typename... ArgsT>
4719 auto callNanoGrid(GridDataT *gridData, ArgsT&&... args)
4720 {
4721  static_assert(util::is_same<GridDataT, GridData, const GridData>::value, "Expected gridData to be of type GridData* or const GridData*");
4722  switch (gridData->mGridType){
4723  case GridType::Float:
4724  return OpT::template known<float>(gridData, args...);
4725  case GridType::Double:
4726  return OpT::template known<double>(gridData, args...);
4727  case GridType::Int16:
4728  return OpT::template known<int16_t>(gridData, args...);
4729  case GridType::Int32:
4730  return OpT::template known<int32_t>(gridData, args...);
4731  case GridType::Int64:
4732  return OpT::template known<int64_t>(gridData, args...);
4733  case GridType::Vec3f:
4734  return OpT::template known<Vec3f>(gridData, args...);
4735  case GridType::Vec3d:
4736  return OpT::template known<Vec3d>(gridData, args...);
4737  case GridType::UInt32:
4738  return OpT::template known<uint32_t>(gridData, args...);
4739  case GridType::Mask:
4740  return OpT::template known<ValueMask>(gridData, args...);
4741  case GridType::Index:
4742  return OpT::template known<ValueIndex>(gridData, args...);
4743  case GridType::OnIndex:
4744  return OpT::template known<ValueOnIndex>(gridData, args...);
4745  case GridType::IndexMask:
4746  return OpT::template known<ValueIndexMask>(gridData, args...);
4747  case GridType::OnIndexMask:
4748  return OpT::template known<ValueOnIndexMask>(gridData, args...);
4749  case GridType::Boolean:
4750  return OpT::template known<bool>(gridData, args...);
4751  case GridType::RGBA8:
4752  return OpT::template known<math::Rgba8>(gridData, args...);
4753  case GridType::Fp4:
4754  return OpT::template known<Fp4>(gridData, args...);
4755  case GridType::Fp8:
4756  return OpT::template known<Fp8>(gridData, args...);
4757  case GridType::Fp16:
4758  return OpT::template known<Fp16>(gridData, args...);
4759  case GridType::FpN:
4760  return OpT::template known<FpN>(gridData, args...);
4761  case GridType::Vec4f:
4762  return OpT::template known<Vec4f>(gridData, args...);
4763  case GridType::Vec4d:
4764  return OpT::template known<Vec4d>(gridData, args...);
4765  case GridType::UInt8:
4766  return OpT::template known<uint8_t>(gridData, args...);
4767  default:
4768  return OpT::unknown(gridData, args...);
4769  }
4770 }// callNanoGrid
4771 
4772 // --------------------------> ReadAccessor <------------------------------------
4773 
4774 /// @brief A read-only value accessor with three levels of node caching. This allows for
4775 /// inverse tree traversal during lookup, which is on average significantly faster
4776 /// than calling the equivalent method on the tree (i.e. top-down traversal).
4777 ///
4778 /// @note By virtue of the fact that a value accessor accelerates random access operations
4779 /// by re-using cached access patterns, this access should be reused for multiple access
4780 /// operations. In other words, never create an instance of this accessor for a single
4781 /// access only. In general avoid single access operations with this accessor, and
4782 /// if that is not possible call the corresponding method on the tree instead.
4783 ///
4784 /// @warning Since this ReadAccessor internally caches raw pointers to the nodes of the tree
4785 /// structure, it is not safe to copy between host and device, or even to share among
4786 /// multiple threads on the same host or device. However, it is light-weight so simple
4787 /// instantiate one per thread (on the host and/or device).
4788 ///
4789 /// @details Used to accelerated random access into a VDB tree. Provides on average
4790 /// O(1) random access operations by means of inverse tree traversal,
4791 /// which amortizes the non-const time complexity of the root node.
4792 
4793 template<typename BuildT>
4794 class ReadAccessor<BuildT, -1, -1, -1>
4795 {
4796  using GridT = NanoGrid<BuildT>; // grid
4797  using TreeT = NanoTree<BuildT>; // tree
4798  using RootT = NanoRoot<BuildT>; // root node
4799  using LeafT = NanoLeaf<BuildT>; // Leaf node
4800  using FloatType = typename RootT::FloatType;
4801  using CoordValueType = typename RootT::CoordType::ValueType;
4802 
4803  mutable const RootT* mRoot; // 8 bytes (mutable to allow for access methods to be const)
4804 public:
4805  using BuildType = BuildT;
4806  using ValueType = typename RootT::ValueType;
4807  using CoordType = typename RootT::CoordType;
4808 
4809  static const int CacheLevels = 0;
4810 
4811  /// @brief Constructor from a root node
4813  : mRoot{&root}
4814  {
4815  }
4816 
4817  /// @brief Constructor from a grid
4818  __hostdev__ ReadAccessor(const GridT& grid)
4819  : ReadAccessor(grid.tree().root())
4820  {
4821  }
4822 
4823  /// @brief Constructor from a tree
4825  : ReadAccessor(tree.root())
4826  {
4827  }
4828 
4829  /// @brief Reset this access to its initial state, i.e. with an empty cache
4830  /// @node Noop since this template specialization has no cache
4831  __hostdev__ void clear() {}
4832 
4833  __hostdev__ const RootT& root() const { return *mRoot; }
4834 
4835  /// @brief Defaults constructors
4836  ReadAccessor(const ReadAccessor&) = default;
4837  ~ReadAccessor() = default;
4838  ReadAccessor& operator=(const ReadAccessor&) = default;
4840  {
4841  return this->template get<GetValue<BuildT>>(ijk);
4842  }
4843  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4844  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
4845  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4846  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
4847  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
4848  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
4849  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
4850  template<typename RayT>
4851  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
4852  {
4853  return mRoot->getDimAndCache(ijk, ray, *this);
4854  }
4855  template<typename OpT, typename... ArgsT>
4856  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
4857  {
4858  return mRoot->template get<OpT>(ijk, args...);
4859  }
4860 
4861  template<typename OpT, typename... ArgsT>
4862  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
4863  {
4864  return const_cast<RootT*>(mRoot)->template set<OpT>(ijk, args...);
4865  }
4866 
4867 private:
4868  /// @brief Allow nodes to insert themselves into the cache.
4869  template<typename>
4870  friend class RootNode;
4871  template<typename, uint32_t>
4872  friend class InternalNode;
4873  template<typename, typename, template<uint32_t> class, uint32_t>
4874  friend class LeafNode;
4875 
4876  /// @brief No-op
4877  template<typename NodeT>
4878  __hostdev__ void insert(const CoordType&, const NodeT*) const {}
4879 }; // ReadAccessor<ValueT, -1, -1, -1> class
4880 
4881 /// @brief Node caching at a single tree level
4882 template<typename BuildT, int LEVEL0>
4883 class ReadAccessor<BuildT, LEVEL0, -1, -1> //e.g. 0, 1, 2
4884 {
4885  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 should be 0, 1, or 2");
4886 
4887  using GridT = NanoGrid<BuildT>; // grid
4888  using TreeT = NanoTree<BuildT>;
4889  using RootT = NanoRoot<BuildT>; // root node
4890  using LeafT = NanoLeaf<BuildT>; // Leaf node
4891  using NodeT = typename NodeTrait<TreeT, LEVEL0>::type;
4892  using CoordT = typename RootT::CoordType;
4893  using ValueT = typename RootT::ValueType;
4894 
4895  using FloatType = typename RootT::FloatType;
4896  using CoordValueType = typename RootT::CoordT::ValueType;
4897 
4898  // All member data are mutable to allow for access methods to be const
4899  mutable CoordT mKey; // 3*4 = 12 bytes
4900  mutable const RootT* mRoot; // 8 bytes
4901  mutable const NodeT* mNode; // 8 bytes
4902 
4903 public:
4904  using BuildType = BuildT;
4905  using ValueType = ValueT;
4906  using CoordType = CoordT;
4907 
4908  static const int CacheLevels = 1;
4909 
4910  /// @brief Constructor from a root node
4912  : mKey(CoordType::max())
4913  , mRoot(&root)
4914  , mNode(nullptr)
4915  {
4916  }
4917 
4918  /// @brief Constructor from a grid
4920  : ReadAccessor(grid.tree().root())
4921  {
4922  }
4923 
4924  /// @brief Constructor from a tree
4926  : ReadAccessor(tree.root())
4927  {
4928  }
4929 
4930  /// @brief Reset this access to its initial state, i.e. with an empty cache
4932  {
4933  mKey = CoordType::max();
4934  mNode = nullptr;
4935  }
4936 
4937  __hostdev__ const RootT& root() const { return *mRoot; }
4938 
4939  /// @brief Defaults constructors
4940  ReadAccessor(const ReadAccessor&) = default;
4941  ~ReadAccessor() = default;
4942  ReadAccessor& operator=(const ReadAccessor&) = default;
4943 
4944  __hostdev__ bool isCached(const CoordType& ijk) const
4945  {
4946  return (ijk[0] & int32_t(~NodeT::MASK)) == mKey[0] &&
4947  (ijk[1] & int32_t(~NodeT::MASK)) == mKey[1] &&
4948  (ijk[2] & int32_t(~NodeT::MASK)) == mKey[2];
4949  }
4950 
4952  {
4953  return this->template get<GetValue<BuildT>>(ijk);
4954  }
4955  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4956  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
4957  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4958  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
4959  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
4960  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
4961  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
4962 
4963  template<typename RayT>
4964  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
4965  {
4966  if (this->isCached(ijk)) return mNode->getDimAndCache(ijk, ray, *this);
4967  return mRoot->getDimAndCache(ijk, ray, *this);
4968  }
4969 
4970  template<typename OpT, typename... ArgsT>
4971  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
4972  {
4973  if constexpr(OpT::LEVEL <= LEVEL0) if (this->isCached(ijk)) return mNode->template getAndCache<OpT>(ijk, *this, args...);
4974  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
4975  }
4976 
4977  template<typename OpT, typename... ArgsT>
4978  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args) const
4979  {
4980  if constexpr(OpT::LEVEL <= LEVEL0) if (this->isCached(ijk)) return const_cast<NodeT*>(mNode)->template setAndCache<OpT>(ijk, *this, args...);
4981  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
4982  }
4983 
4984 private:
4985  /// @brief Allow nodes to insert themselves into the cache.
4986  template<typename>
4987  friend class RootNode;
4988  template<typename, uint32_t>
4989  friend class InternalNode;
4990  template<typename, typename, template<uint32_t> class, uint32_t>
4991  friend class LeafNode;
4992 
4993  /// @brief Inserts a leaf node and key pair into this ReadAccessor
4994  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
4995  {
4996  mKey = ijk & ~NodeT::MASK;
4997  mNode = node;
4998  }
4999 
5000  // no-op
5001  template<typename OtherNodeT>
5002  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
5003 
5004 }; // ReadAccessor<ValueT, LEVEL0>
5005 
5006 template<typename BuildT, int LEVEL0, int LEVEL1>
5007 class ReadAccessor<BuildT, LEVEL0, LEVEL1, -1> //e.g. (0,1), (1,2), (0,2)
5008 {
5009  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 must be 0, 1, 2");
5010  static_assert(LEVEL1 >= 0 && LEVEL1 <= 2, "LEVEL1 must be 0, 1, 2");
5011  static_assert(LEVEL0 < LEVEL1, "Level 0 must be lower than level 1");
5012  using GridT = NanoGrid<BuildT>; // grid
5013  using TreeT = NanoTree<BuildT>;
5014  using RootT = NanoRoot<BuildT>;
5015  using LeafT = NanoLeaf<BuildT>;
5016  using Node1T = typename NodeTrait<TreeT, LEVEL0>::type;
5017  using Node2T = typename NodeTrait<TreeT, LEVEL1>::type;
5018  using CoordT = typename RootT::CoordType;
5019  using ValueT = typename RootT::ValueType;
5020  using FloatType = typename RootT::FloatType;
5021  using CoordValueType = typename RootT::CoordT::ValueType;
5022 
5023  // All member data are mutable to allow for access methods to be const
5024 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 44 bytes total
5025  mutable CoordT mKey; // 3*4 = 12 bytes
5026 #else // 68 bytes total
5027  mutable CoordT mKeys[2]; // 2*3*4 = 24 bytes
5028 #endif
5029  mutable const RootT* mRoot;
5030  mutable const Node1T* mNode1;
5031  mutable const Node2T* mNode2;
5032 
5033 public:
5034  using BuildType = BuildT;
5035  using ValueType = ValueT;
5036  using CoordType = CoordT;
5037 
5038  static const int CacheLevels = 2;
5039 
5040  /// @brief Constructor from a root node
5042 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5043  : mKey(CoordType::max())
5044 #else
5045  : mKeys{CoordType::max(), CoordType::max()}
5046 #endif
5047  , mRoot(&root)
5048  , mNode1(nullptr)
5049  , mNode2(nullptr)
5050  {
5051  }
5052 
5053  /// @brief Constructor from a grid
5055  : ReadAccessor(grid.tree().root())
5056  {
5057  }
5058 
5059  /// @brief Constructor from a tree
5061  : ReadAccessor(tree.root())
5062  {
5063  }
5064 
5065  /// @brief Reset this access to its initial state, i.e. with an empty cache
5067  {
5068 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5069  mKey = CoordType::max();
5070 #else
5071  mKeys[0] = mKeys[1] = CoordType::max();
5072 #endif
5073  mNode1 = nullptr;
5074  mNode2 = nullptr;
5075  }
5076 
5077  __hostdev__ const RootT& root() const { return *mRoot; }
5078 
5079  /// @brief Defaults constructors
5080  ReadAccessor(const ReadAccessor&) = default;
5081  ~ReadAccessor() = default;
5082  ReadAccessor& operator=(const ReadAccessor&) = default;
5083 
5084 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5085  __hostdev__ bool isCached1(CoordValueType dirty) const
5086  {
5087  if (!mNode1)
5088  return false;
5089  if (dirty & int32_t(~Node1T::MASK)) {
5090  mNode1 = nullptr;
5091  return false;
5092  }
5093  return true;
5094  }
5095  __hostdev__ bool isCached2(CoordValueType dirty) const
5096  {
5097  if (!mNode2)
5098  return false;
5099  if (dirty & int32_t(~Node2T::MASK)) {
5100  mNode2 = nullptr;
5101  return false;
5102  }
5103  return true;
5104  }
5105  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
5106  {
5107  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
5108  }
5109 #else
5110  __hostdev__ bool isCached1(const CoordType& ijk) const
5111  {
5112  return (ijk[0] & int32_t(~Node1T::MASK)) == mKeys[0][0] &&
5113  (ijk[1] & int32_t(~Node1T::MASK)) == mKeys[0][1] &&
5114  (ijk[2] & int32_t(~Node1T::MASK)) == mKeys[0][2];
5115  }
5116  __hostdev__ bool isCached2(const CoordType& ijk) const
5117  {
5118  return (ijk[0] & int32_t(~Node2T::MASK)) == mKeys[1][0] &&
5119  (ijk[1] & int32_t(~Node2T::MASK)) == mKeys[1][1] &&
5120  (ijk[2] & int32_t(~Node2T::MASK)) == mKeys[1][2];
5121  }
5122 #endif
5123 
5125  {
5126  return this->template get<GetValue<BuildT>>(ijk);
5127  }
5128  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5129  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
5130  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5131  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
5132  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
5133  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
5134  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
5135 
5136  template<typename RayT>
5137  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5138  {
5139 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5140  const CoordValueType dirty = this->computeDirty(ijk);
5141 #else
5142  auto&& dirty = ijk;
5143 #endif
5144  if (this->isCached1(dirty)) {
5145  return mNode1->getDimAndCache(ijk, ray, *this);
5146  } else if (this->isCached2(dirty)) {
5147  return mNode2->getDimAndCache(ijk, ray, *this);
5148  }
5149  return mRoot->getDimAndCache(ijk, ray, *this);
5150  }
5151 
5152  template<typename OpT, typename... ArgsT>
5153  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
5154  {
5155 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5156  const CoordValueType dirty = this->computeDirty(ijk);
5157 #else
5158  auto&& dirty = ijk;
5159 #endif
5160  if constexpr(OpT::LEVEL <= LEVEL0) {
5161  if (this->isCached1(dirty)) return mNode1->template getAndCache<OpT>(ijk, *this, args...);
5162  } else if constexpr(OpT::LEVEL <= LEVEL1) {
5163  if (this->isCached2(dirty)) return mNode2->template getAndCache<OpT>(ijk, *this, args...);
5164  }
5165  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
5166  }
5167 
5168  template<typename OpT, typename... ArgsT>
5169  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args) const
5170  {
5171 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5172  const CoordValueType dirty = this->computeDirty(ijk);
5173 #else
5174  auto&& dirty = ijk;
5175 #endif
5176  if constexpr(OpT::LEVEL <= LEVEL0) {
5177  if (this->isCached1(dirty)) return const_cast<Node1T*>(mNode1)->template setAndCache<OpT>(ijk, *this, args...);
5178  } else if constexpr(OpT::LEVEL <= LEVEL1) {
5179  if (this->isCached2(dirty)) return const_cast<Node2T*>(mNode2)->template setAndCache<OpT>(ijk, *this, args...);
5180  }
5181  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
5182  }
5183 
5184 private:
5185  /// @brief Allow nodes to insert themselves into the cache.
5186  template<typename>
5187  friend class RootNode;
5188  template<typename, uint32_t>
5189  friend class InternalNode;
5190  template<typename, typename, template<uint32_t> class, uint32_t>
5191  friend class LeafNode;
5192 
5193  /// @brief Inserts a leaf node and key pair into this ReadAccessor
5194  __hostdev__ void insert(const CoordType& ijk, const Node1T* node) const
5195  {
5196 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5197  mKey = ijk;
5198 #else
5199  mKeys[0] = ijk & ~Node1T::MASK;
5200 #endif
5201  mNode1 = node;
5202  }
5203  __hostdev__ void insert(const CoordType& ijk, const Node2T* node) const
5204  {
5205 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5206  mKey = ijk;
5207 #else
5208  mKeys[1] = ijk & ~Node2T::MASK;
5209 #endif
5210  mNode2 = node;
5211  }
5212  template<typename OtherNodeT>
5213  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
5214 }; // ReadAccessor<BuildT, LEVEL0, LEVEL1>
5215 
5216 /// @brief Node caching at all (three) tree levels
5217 template<typename BuildT>
5218 class ReadAccessor<BuildT, 0, 1, 2>
5219 {
5220  using GridT = NanoGrid<BuildT>; // grid
5221  using TreeT = NanoTree<BuildT>;
5222  using RootT = NanoRoot<BuildT>; // root node
5223  using NodeT2 = NanoUpper<BuildT>; // upper internal node
5224  using NodeT1 = NanoLower<BuildT>; // lower internal node
5225  using LeafT = NanoLeaf<BuildT>; // Leaf node
5226  using CoordT = typename RootT::CoordType;
5227  using ValueT = typename RootT::ValueType;
5228 
5229  using FloatType = typename RootT::FloatType;
5230  using CoordValueType = typename RootT::CoordT::ValueType;
5231 
5232  // All member data are mutable to allow for access methods to be const
5233 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 44 bytes total
5234  mutable CoordT mKey; // 3*4 = 12 bytes
5235 #else // 68 bytes total
5236  mutable CoordT mKeys[3]; // 3*3*4 = 36 bytes
5237 #endif
5238  mutable const RootT* mRoot;
5239  mutable const void* mNode[3]; // 4*8 = 32 bytes
5240 
5241 public:
5242  using BuildType = BuildT;
5243  using ValueType = ValueT;
5244  using CoordType = CoordT;
5245 
5246  static const int CacheLevels = 3;
5247 
5248  /// @brief Constructor from a root node
5250 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5251  : mKey(CoordType::max())
5252 #else
5254 #endif
5255  , mRoot(&root)
5256  , mNode{nullptr, nullptr, nullptr}
5257  {
5258  }
5259 
5260  /// @brief Constructor from a grid
5261  __hostdev__ ReadAccessor(const GridT& grid)
5262  : ReadAccessor(grid.tree().root())
5263  {
5264  }
5265 
5266  /// @brief Constructor from a tree
5268  : ReadAccessor(tree.root())
5269  {
5270  }
5271 
5272  __hostdev__ const RootT& root() const { return *mRoot; }
5273 
5274  /// @brief Defaults constructors
5275  ReadAccessor(const ReadAccessor&) = default;
5276  ~ReadAccessor() = default;
5277  ReadAccessor& operator=(const ReadAccessor&) = default;
5278 
5279  /// @brief Return a const point to the cached node of the specified type
5280  ///
5281  /// @warning The return value could be NULL.
5282  template<typename NodeT>
5283  __hostdev__ const NodeT* getNode() const
5284  {
5285  using T = typename NodeTrait<TreeT, NodeT::LEVEL>::type;
5286  static_assert(util::is_same<T, NodeT>::value, "ReadAccessor::getNode: Invalid node type");
5287  return reinterpret_cast<const T*>(mNode[NodeT::LEVEL]);
5288  }
5289 
5290  template<int LEVEL>
5292  {
5293  using T = typename NodeTrait<TreeT, LEVEL>::type;
5294  static_assert(LEVEL >= 0 && LEVEL <= 2, "ReadAccessor::getNode: Invalid node type");
5295  return reinterpret_cast<const T*>(mNode[LEVEL]);
5296  }
5297 
5298  /// @brief Reset this access to its initial state, i.e. with an empty cache
5300  {
5301 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5302  mKey = CoordType::max();
5303 #else
5304  mKeys[0] = mKeys[1] = mKeys[2] = CoordType::max();
5305 #endif
5306  mNode[0] = mNode[1] = mNode[2] = nullptr;
5307  }
5308 
5309 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5310  template<typename NodeT>
5311  __hostdev__ bool isCached(CoordValueType dirty) const
5312  {
5313  if (!mNode[NodeT::LEVEL])
5314  return false;
5315  if (dirty & int32_t(~NodeT::MASK)) {
5316  mNode[NodeT::LEVEL] = nullptr;
5317  return false;
5318  }
5319  return true;
5320  }
5321 
5322  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
5323  {
5324  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
5325  }
5326 #else
5327  template<typename NodeT>
5328  __hostdev__ bool isCached(const CoordType& ijk) const
5329  {
5330  return (ijk[0] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][0] &&
5331  (ijk[1] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][1] &&
5332  (ijk[2] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][2];
5333  }
5334 #endif
5335 
5336  __hostdev__ ValueType getValue(const CoordType& ijk) const {return this->template get<GetValue<BuildT>>(ijk);}
5337  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5338  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
5339  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5340  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
5341  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
5342  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
5343  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
5344 
5345  template<typename OpT, typename... ArgsT>
5346  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
5347  {
5348 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5349  const CoordValueType dirty = this->computeDirty(ijk);
5350 #else
5351  auto&& dirty = ijk;
5352 #endif
5353  if constexpr(OpT::LEVEL <=0) {
5354  if (this->isCached<LeafT>(dirty)) return ((const LeafT*)mNode[0])->template getAndCache<OpT>(ijk, *this, args...);
5355  } else if constexpr(OpT::LEVEL <= 1) {
5356  if (this->isCached<NodeT1>(dirty)) return ((const NodeT1*)mNode[1])->template getAndCache<OpT>(ijk, *this, args...);
5357  } else if constexpr(OpT::LEVEL <= 2) {
5358  if (this->isCached<NodeT2>(dirty)) return ((const NodeT2*)mNode[2])->template getAndCache<OpT>(ijk, *this, args...);
5359  }
5360  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
5361  }
5362 
5363  template<typename OpT, typename... ArgsT>
5364  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args) const
5365  {
5366 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5367  const CoordValueType dirty = this->computeDirty(ijk);
5368 #else
5369  auto&& dirty = ijk;
5370 #endif
5371  if constexpr(OpT::LEVEL <= 0) {
5372  if (this->isCached<LeafT>(dirty)) return ((LeafT*)mNode[0])->template setAndCache<OpT>(ijk, *this, args...);
5373  } else if constexpr(OpT::LEVEL <= 1) {
5374  if (this->isCached<NodeT1>(dirty)) return ((NodeT1*)mNode[1])->template setAndCache<OpT>(ijk, *this, args...);
5375  } else if constexpr(OpT::LEVEL <= 2) {
5376  if (this->isCached<NodeT2>(dirty)) return ((NodeT2*)mNode[2])->template setAndCache<OpT>(ijk, *this, args...);
5377  }
5378  return ((RootT*)mRoot)->template setAndCache<OpT>(ijk, *this, args...);
5379  }
5380 
5381  template<typename RayT>
5382  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5383  {
5384 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5385  const CoordValueType dirty = this->computeDirty(ijk);
5386 #else
5387  auto&& dirty = ijk;
5388 #endif
5389  if (this->isCached<LeafT>(dirty)) {
5390  return ((LeafT*)mNode[0])->getDimAndCache(ijk, ray, *this);
5391  } else if (this->isCached<NodeT1>(dirty)) {
5392  return ((NodeT1*)mNode[1])->getDimAndCache(ijk, ray, *this);
5393  } else if (this->isCached<NodeT2>(dirty)) {
5394  return ((NodeT2*)mNode[2])->getDimAndCache(ijk, ray, *this);
5395  }
5396  return mRoot->getDimAndCache(ijk, ray, *this);
5397  }
5398 
5399 private:
5400  /// @brief Allow nodes to insert themselves into the cache.
5401  template<typename>
5402  friend class RootNode;
5403  template<typename, uint32_t>
5404  friend class InternalNode;
5405  template<typename, typename, template<uint32_t> class, uint32_t>
5406  friend class LeafNode;
5407 
5408  /// @brief Inserts a leaf node and key pair into this ReadAccessor
5409  template<typename NodeT>
5410  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
5411  {
5412 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5413  mKey = ijk;
5414 #else
5415  mKeys[NodeT::LEVEL] = ijk & ~NodeT::MASK;
5416 #endif
5417  mNode[NodeT::LEVEL] = node;
5418  }
5419 }; // ReadAccessor<BuildT, 0, 1, 2>
5420 
5421 //////////////////////////////////////////////////
5422 
5423 /// @brief Free-standing function for convenient creation of a ReadAccessor with
5424 /// optional and customizable node caching.
5425 ///
5426 /// @details createAccessor<>(grid): No caching of nodes and hence it's thread-safe but slow
5427 /// createAccessor<0>(grid): Caching of leaf nodes only
5428 /// createAccessor<1>(grid): Caching of lower internal nodes only
5429 /// createAccessor<2>(grid): Caching of upper internal nodes only
5430 /// createAccessor<0,1>(grid): Caching of leaf and lower internal nodes
5431 /// createAccessor<0,2>(grid): Caching of leaf and upper internal nodes
5432 /// createAccessor<1,2>(grid): Caching of lower and upper internal nodes
5433 /// createAccessor<0,1,2>(grid): Caching of all nodes at all tree levels
5434 
5435 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5437 {
5439 }
5440 
5441 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5443 {
5445 }
5446 
5447 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5449 {
5451 }
5452 
5453 //////////////////////////////////////////////////
5454 
5455 /// @brief This is a convenient class that allows for access to grid meta-data
5456 /// that are independent of the value type of a grid. That is, this class
5457 /// can be used to get information about a grid without actually knowing
5458 /// its ValueType.
5460 { // 768 bytes (32 byte aligned)
5461  GridData mGridData; // 672B
5462  TreeData mTreeData; // 64B
5463  CoordBBox mIndexBBox; // 24B. AABB of active values in index space.
5464  uint32_t mRootTableSize, mPadding{0}; // 8B
5465 
5466 public:
5467  template<typename T>
5469  {
5470  mGridData = *grid.data();
5471  mTreeData = *grid.tree().data();
5472  mIndexBBox = grid.indexBBox();
5473  mRootTableSize = grid.tree().root().getTableSize();
5474  }
5475  GridMetaData(const GridData* gridData)
5476  {
5477  if (GridMetaData::safeCast(gridData)) {
5478  *this = *reinterpret_cast<const GridMetaData*>(gridData);
5479  //util::memcpy(this, (const GridMetaData*)gridData);
5480  } else {// otherwise copy each member individually
5481  mGridData = *gridData;
5482  mTreeData = *reinterpret_cast<const TreeData*>(gridData->treePtr());
5483  mIndexBBox = gridData->indexBBox();
5484  mRootTableSize = gridData->rootTableSize();
5485  }
5486  }
5487  GridMetaData& operator=(const GridMetaData&) = default;
5488  /// @brief return true if the RootData follows right after the TreeData.
5489  /// If so, this implies that it's safe to cast the grid from which
5490  /// this instance was constructed to a GridMetaData
5491  __hostdev__ bool safeCast() const { return mTreeData.isRootNext(); }
5492 
5493  /// @brief return true if it is safe to cast the grid to a pointer
5494  /// of type GridMetaData, i.e. construction can be avoided.
5495  __hostdev__ static bool safeCast(const GridData *gridData){
5496  NANOVDB_ASSERT(gridData && gridData->isValid());
5497  return gridData->isRootConnected();
5498  }
5499  /// @brief return true if it is safe to cast the grid to a pointer
5500  /// of type GridMetaData, i.e. construction can be avoided.
5501  template<typename T>
5502  __hostdev__ static bool safeCast(const NanoGrid<T>& grid){return grid.tree().isRootNext();}
5503  __hostdev__ bool isValid() const { return mGridData.isValid(); }
5504  __hostdev__ const GridType& gridType() const { return mGridData.mGridType; }
5505  __hostdev__ const GridClass& gridClass() const { return mGridData.mGridClass; }
5506  __hostdev__ bool isLevelSet() const { return mGridData.mGridClass == GridClass::LevelSet; }
5507  __hostdev__ bool isFogVolume() const { return mGridData.mGridClass == GridClass::FogVolume; }
5508  __hostdev__ bool isStaggered() const { return mGridData.mGridClass == GridClass::Staggered; }
5509  __hostdev__ bool isPointIndex() const { return mGridData.mGridClass == GridClass::PointIndex; }
5510  __hostdev__ bool isGridIndex() const { return mGridData.mGridClass == GridClass::IndexGrid; }
5511  __hostdev__ bool isPointData() const { return mGridData.mGridClass == GridClass::PointData; }
5512  __hostdev__ bool isMask() const { return mGridData.mGridClass == GridClass::Topology; }
5513  __hostdev__ bool isUnknown() const { return mGridData.mGridClass == GridClass::Unknown; }
5514  __hostdev__ bool hasMinMax() const { return mGridData.mFlags.isMaskOn(GridFlags::HasMinMax); }
5515  __hostdev__ bool hasBBox() const { return mGridData.mFlags.isMaskOn(GridFlags::HasBBox); }
5516  __hostdev__ bool hasLongGridName() const { return mGridData.mFlags.isMaskOn(GridFlags::HasLongGridName); }
5517  __hostdev__ bool hasAverage() const { return mGridData.mFlags.isMaskOn(GridFlags::HasAverage); }
5518  __hostdev__ bool hasStdDeviation() const { return mGridData.mFlags.isMaskOn(GridFlags::HasStdDeviation); }
5519  __hostdev__ bool isBreadthFirst() const { return mGridData.mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
5520  __hostdev__ uint64_t gridSize() const { return mGridData.mGridSize; }
5521  __hostdev__ uint32_t gridIndex() const { return mGridData.mGridIndex; }
5522  __hostdev__ uint32_t gridCount() const { return mGridData.mGridCount; }
5523  __hostdev__ const char* shortGridName() const { return mGridData.mGridName; }
5524  __hostdev__ const Map& map() const { return mGridData.mMap; }
5525  __hostdev__ const Vec3dBBox& worldBBox() const { return mGridData.mWorldBBox; }
5526  __hostdev__ const CoordBBox& indexBBox() const { return mIndexBBox; }
5527  __hostdev__ Vec3d voxelSize() const { return mGridData.mVoxelSize; }
5528  __hostdev__ int blindDataCount() const { return mGridData.mBlindMetadataCount; }
5529  __hostdev__ uint64_t activeVoxelCount() const { return mTreeData.mVoxelCount; }
5530  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const { return mTreeData.mTileCount[level - 1]; }
5531  __hostdev__ uint32_t nodeCount(uint32_t level) const { return mTreeData.mNodeCount[level]; }
5532  __hostdev__ const Checksum& checksum() const { return mGridData.mChecksum; }
5533  __hostdev__ uint32_t rootTableSize() const { return mRootTableSize; }
5534  __hostdev__ bool isEmpty() const { return mRootTableSize == 0; }
5535  __hostdev__ Version version() const { return mGridData.mVersion; }
5536 }; // GridMetaData
5537 
5538 /// @brief Class to access points at a specific voxel location
5539 ///
5540 /// @note If GridClass::PointIndex AttT should be uint32_t and if GridClass::PointData Vec3f
5541 template<typename AttT, typename BuildT = uint32_t>
5542 class PointAccessor : public DefaultReadAccessor<BuildT>
5543 {
5544  using AccT = DefaultReadAccessor<BuildT>;
5545  const NanoGrid<BuildT>& mGrid;
5546  const AttT* mData;
5547 
5548 public:
5550  : AccT(grid.tree().root())
5551  , mGrid(grid)
5552  , mData(grid.template getBlindData<AttT>(0))
5553  {
5554  NANOVDB_ASSERT(grid.gridType() == toGridType<BuildT>());
5557  }
5558 
5559  /// @brief return true if this access was initialized correctly
5560  __hostdev__ operator bool() const { return mData != nullptr; }
5561 
5562  __hostdev__ const NanoGrid<BuildT>& grid() const { return mGrid; }
5563 
5564  /// @brief Return the total number of point in the grid and set the
5565  /// iterators to the complete range of points.
5566  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
5567  {
5568  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
5569  begin = mData;
5570  end = begin + count;
5571  return count;
5572  }
5573  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
5574  /// If this return value is larger than zero then the iterators @a begin and @a end
5575  /// will point to all the attributes contained within that leaf node.
5576  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5577  {
5578  auto* leaf = this->probeLeaf(ijk);
5579  if (leaf == nullptr) {
5580  return 0;
5581  }
5582  begin = mData + leaf->minimum();
5583  end = begin + leaf->maximum();
5584  return leaf->maximum();
5585  }
5586 
5587  /// @brief get iterators over attributes to points at a specific voxel location
5588  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5589  {
5590  begin = end = nullptr;
5591  if (auto* leaf = this->probeLeaf(ijk)) {
5592  const uint32_t offset = NanoLeaf<BuildT>::CoordToOffset(ijk);
5593  if (leaf->isActive(offset)) {
5594  begin = mData + leaf->minimum();
5595  end = begin + leaf->getValue(offset);
5596  if (offset > 0u)
5597  begin += leaf->getValue(offset - 1);
5598  }
5599  }
5600  return end - begin;
5601  }
5602 }; // PointAccessor
5603 
5604 template<typename AttT>
5605 class PointAccessor<AttT, Point> : public DefaultReadAccessor<Point>
5606 {
5607  using AccT = DefaultReadAccessor<Point>;
5608  const NanoGrid<Point>& mGrid;
5609  const AttT* mData;
5610 
5611 public:
5613  : AccT(grid.tree().root())
5614  , mGrid(grid)
5615  , mData(grid.template getBlindData<AttT>(0))
5616  {
5617  NANOVDB_ASSERT(mData);
5624  }
5625 
5626  /// @brief return true if this access was initialized correctly
5627  __hostdev__ operator bool() const { return mData != nullptr; }
5628 
5629  __hostdev__ const NanoGrid<Point>& grid() const { return mGrid; }
5630 
5631  /// @brief Return the total number of point in the grid and set the
5632  /// iterators to the complete range of points.
5633  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
5634  {
5635  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
5636  begin = mData;
5637  end = begin + count;
5638  return count;
5639  }
5640  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
5641  /// If this return value is larger than zero then the iterators @a begin and @a end
5642  /// will point to all the attributes contained within that leaf node.
5643  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5644  {
5645  auto* leaf = this->probeLeaf(ijk);
5646  if (leaf == nullptr)
5647  return 0;
5648  begin = mData + leaf->offset();
5649  end = begin + leaf->pointCount();
5650  return leaf->pointCount();
5651  }
5652 
5653  /// @brief get iterators over attributes to points at a specific voxel location
5654  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5655  {
5656  if (auto* leaf = this->probeLeaf(ijk)) {
5657  const uint32_t n = NanoLeaf<Point>::CoordToOffset(ijk);
5658  if (leaf->isActive(n)) {
5659  begin = mData + leaf->first(n);
5660  end = mData + leaf->last(n);
5661  return end - begin;
5662  }
5663  }
5664  begin = end = nullptr;
5665  return 0u; // no leaf or inactive voxel
5666  }
5667 }; // PointAccessor<AttT, Point>
5668 
5669 /// @brief Class to access values in channels at a specific voxel location.
5670 ///
5671 /// @note The ChannelT template parameter can be either const and non-const.
5672 template<typename ChannelT, typename IndexT = ValueIndex>
5673 class ChannelAccessor : public DefaultReadAccessor<IndexT>
5674 {
5675  static_assert(BuildTraits<IndexT>::is_index, "Expected an index build type");
5677 
5678  const NanoGrid<IndexT>& mGrid;
5679  ChannelT* mChannel;
5680 
5681 public:
5682  using ValueType = ChannelT;
5685 
5686  /// @brief Ctor from an IndexGrid and an integer ID of an internal channel
5687  /// that is assumed to exist as blind data in the IndexGrid.
5688  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, uint32_t channelID = 0u)
5689  : BaseT(grid.tree().root())
5690  , mGrid(grid)
5691  , mChannel(nullptr)
5692  {
5693  NANOVDB_ASSERT(isIndex(grid.gridType()));
5695  this->setChannel(channelID);
5696  }
5697 
5698  /// @brief Ctor from an IndexGrid and an external channel
5699  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, ChannelT* channelPtr)
5700  : BaseT(grid.tree().root())
5701  , mGrid(grid)
5702  , mChannel(channelPtr)
5703  {
5704  NANOVDB_ASSERT(isIndex(grid.gridType()));
5706  }
5707 
5708  /// @brief return true if this access was initialized correctly
5709  __hostdev__ operator bool() const { return mChannel != nullptr; }
5710 
5711  /// @brief Return a const reference to the IndexGrid
5712  __hostdev__ const NanoGrid<IndexT>& grid() const { return mGrid; }
5713 
5714  /// @brief Return a const reference to the tree of the IndexGrid
5715  __hostdev__ const TreeType& tree() const { return mGrid.tree(); }
5716 
5717  /// @brief Return a vector of the axial voxel sizes
5718  __hostdev__ const Vec3d& voxelSize() const { return mGrid.voxelSize(); }
5719 
5720  /// @brief Return total number of values indexed by the IndexGrid
5721  __hostdev__ const uint64_t& valueCount() const { return mGrid.valueCount(); }
5722 
5723  /// @brief Change to an external channel
5724  /// @return Pointer to channel data
5725  __hostdev__ ChannelT* setChannel(ChannelT* channelPtr) {return mChannel = channelPtr;}
5726 
5727  /// @brief Change to an internal channel, assuming it exists as as blind data
5728  /// in the IndexGrid.
5729  /// @return Pointer to channel data, which could be NULL if channelID is out of range or
5730  /// if ChannelT does not match the value type of the blind data
5731  __hostdev__ ChannelT* setChannel(uint32_t channelID)
5732  {
5733  return mChannel = const_cast<ChannelT*>(mGrid.template getBlindData<ChannelT>(channelID));
5734  }
5735 
5736  /// @brief Return the linear offset into a channel that maps to the specified coordinate
5737  __hostdev__ uint64_t getIndex(const math::Coord& ijk) const { return BaseT::getValue(ijk); }
5738  __hostdev__ uint64_t idx(int i, int j, int k) const { return BaseT::getValue(math::Coord(i, j, k)); }
5739 
5740  /// @brief Return the value from a cached channel that maps to the specified coordinate
5741  __hostdev__ ChannelT& getValue(const math::Coord& ijk) const { return mChannel[BaseT::getValue(ijk)]; }
5742  __hostdev__ ChannelT& operator()(const math::Coord& ijk) const { return this->getValue(ijk); }
5743  __hostdev__ ChannelT& operator()(int i, int j, int k) const { return this->getValue(math::Coord(i, j, k)); }
5744 
5745  /// @brief return the state and updates the value of the specified voxel
5746  __hostdev__ bool probeValue(const math::Coord& ijk, typename util::remove_const<ChannelT>::type& v) const
5747  {
5748  uint64_t idx;
5749  const bool isActive = BaseT::probeValue(ijk, idx);
5750  v = mChannel[idx];
5751  return isActive;
5752  }
5753  /// @brief Return the value from a specified channel that maps to the specified coordinate
5754  ///
5755  /// @note The template parameter can be either const or non-const
5756  template<typename T>
5757  __hostdev__ T& getValue(const math::Coord& ijk, T* channelPtr) const { return channelPtr[BaseT::getValue(ijk)]; }
5758 
5759 }; // ChannelAccessor
5760 
5761 #if 0
5762 // This MiniGridHandle class is only included as a stand-alone example. Note that aligned_alloc is a C++17 feature!
5763 // Normally we recommend using GridHandle defined in util/GridHandle.h but this minimal implementation could be an
5764 // alternative when using the IO methods defined below.
5765 struct MiniGridHandle {
5766  struct BufferType {
5767  uint8_t *data;
5768  uint64_t size;
5769  BufferType(uint64_t n=0) : data(std::aligned_alloc(NANOVDB_DATA_ALIGNMENT, n)), size(n) {assert(isValid(data));}
5770  BufferType(BufferType &&other) : data(other.data), size(other.size) {other.data=nullptr; other.size=0;}
5771  ~BufferType() {std::free(data);}
5772  BufferType& operator=(const BufferType &other) = delete;
5773  BufferType& operator=(BufferType &&other){data=other.data; size=other.size; other.data=nullptr; other.size=0; return *this;}
5774  static BufferType create(size_t n, BufferType* dummy = nullptr) {return BufferType(n);}
5775  } buffer;
5776  MiniGridHandle(BufferType &&buf) : buffer(std::move(buf)) {}
5777  const uint8_t* data() const {return buffer.data;}
5778 };// MiniGridHandle
5779 #endif
5780 
5781 namespace io {
5782 
5783 /// @brief Define compression codecs
5784 ///
5785 /// @note NONE is the default, ZIP is slow but compact and BLOSC offers a great balance.
5786 ///
5787 /// @throw NanoVDB optionally supports ZIP and BLOSC compression and will throw an exception
5788 /// if its support is required but missing.
5789 enum class Codec : uint16_t { NONE = 0,
5790  ZIP = 1,
5791  BLOSC = 2,
5792  End = 3,
5793  StrLen = 6 + End };
5794 
5795 __hostdev__ inline const char* toStr(char *dst, Codec codec)
5796 {
5797  switch (codec){
5798  case Codec::NONE: return util::strcpy(dst, "NONE");
5799  case Codec::ZIP: return util::strcpy(dst, "ZIP");
5800  case Codec::BLOSC : return util::strcpy(dst, "BLOSC");// StrLen = 5 + 1 + End
5801  default: return util::strcpy(dst, "END");
5802  }
5803 }
5804 
5805 __hostdev__ inline Codec toCodec(const char *str)
5806 {
5807  if (util::streq(str, "none")) return Codec::NONE;
5808  if (util::streq(str, "zip")) return Codec::ZIP;
5809  if (util::streq(str, "blosc")) return Codec::BLOSC;
5810  return Codec::End;
5811 }
5812 
5813 /// @brief Data encoded at the head of each segment of a file or stream.
5814 ///
5815 /// @note A file or stream is composed of one or more segments that each contain
5816 // one or more grids.
5817 struct FileHeader {// 16 bytes
5818  uint64_t magic;// 8 bytes
5819  Version version;// 4 bytes version numbers
5820  uint16_t gridCount;// 2 bytes
5821  Codec codec;// 2 bytes
5822  bool isValid() const {return magic == NANOVDB_MAGIC_NUMB || magic == NANOVDB_MAGIC_FILE;}
5823 }; // FileHeader ( 16 bytes = 2 words )
5824 
5825 // @brief Data encoded for each of the grids associated with a segment.
5826 // Grid size in memory (uint64_t) |
5827 // Grid size on disk (uint64_t) |
5828 // Grid name hash key (uint64_t) |
5829 // Numer of active voxels (uint64_t) |
5830 // Grid type (uint32_t) |
5831 // Grid class (uint32_t) |
5832 // Characters in grid name (uint32_t) |
5833 // AABB in world space (2*3*double) | one per grid in file
5834 // AABB in index space (2*3*int) |
5835 // Size of a voxel in world units (3*double) |
5836 // Byte size of the grid name (uint32_t) |
5837 // Number of nodes per level (4*uint32_t) |
5838 // Numer of active tiles per level (3*uint32_t) |
5839 // Codec for file compression (uint16_t) |
5840 // Padding due to 8B alignment (uint16_t) |
5841 // Version number (uint32_t) |
5843 {// 176 bytes
5844  uint64_t gridSize, fileSize, nameKey, voxelCount; // 4 * 8 = 32B.
5847  Vec3dBBox worldBBox; // 2 * 3 * 8 = 48B.
5848  CoordBBox indexBBox; // 2 * 3 * 4 = 24B.
5849  Vec3d voxelSize; // 24B.
5850  uint32_t nameSize; // 4B.
5851  uint32_t nodeCount[4]; //4 x 4 = 16B
5852  uint32_t tileCount[3];// 3 x 4 = 12B
5853  Codec codec; // 2B
5854  uint16_t padding;// 2B, due to 8B alignment from uint64_t
5856 }; // FileMetaData
5857 
5858 // the following code block uses std and therefore needs to be ignored by CUDA and HIP
5859 #if !defined(__CUDA_ARCH__) && !defined(__HIP__)
5860 
5861 // Note that starting with version 32.6.0 it is possible to write and read raw grid buffers to
5862 // files, e.g. os.write((const char*)&buffer.data(), buffer.size()) or more conveniently as
5863 // handle.write(fileName). In addition to this simple approach we offer the methods below to
5864 // write traditional uncompressed nanovdb files that unlike raw files include metadata that
5865 // is used for tools like nanovdb_print.
5866 
5867 ///
5868 /// @brief This is a standalone alternative to io::writeGrid(...,Codec::NONE) defined in util/IO.h
5869 /// Unlike the latter this function has no dependencies at all, not even NanoVDB.h, so it also
5870 /// works if client code only includes PNanoVDB.h!
5871 ///
5872 /// @details Writes a raw NanoVDB buffer, possibly with multiple grids, to a stream WITHOUT compression.
5873 /// It follows all the conventions in util/IO.h so the stream can be read by all existing client
5874 /// code of NanoVDB.
5875 ///
5876 /// @note This method will always write uncompressed grids to the stream, i.e. Blosc or ZIP compression
5877 /// is never applied! This is a fundamental limitation and feature of this standalone function.
5878 ///
5879 /// @throw std::invalid_argument if buffer does not point to a valid NanoVDB grid.
5880 ///
5881 /// @warning This is pretty ugly code that involves lots of pointer and bit manipulations - not for the faint of heart :)
5882 template<typename StreamT> // StreamT class must support: "void write(const char*, size_t)"
5883 void writeUncompressedGrid(StreamT& os, const GridData* gridData, bool raw = false)
5884 {
5885  NANOVDB_ASSERT(gridData->mMagic == NANOVDB_MAGIC_NUMB || gridData->mMagic == NANOVDB_MAGIC_GRID);
5886  NANOVDB_ASSERT(gridData->mVersion.isCompatible());
5887  if (!raw) {// segment with a single grid: FileHeader, FileMetaData, gridName, Grid
5888 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
5889  FileHeader head{NANOVDB_MAGIC_FILE, gridData->mVersion, 1u, Codec::NONE};
5890 #else
5891  FileHeader head{NANOVDB_MAGIC_NUMB, gridData->mVersion, 1u, Codec::NONE};
5892 #endif
5893  const char* gridName = gridData->gridName();
5894  const uint32_t nameSize = util::strlen(gridName) + 1;// include '\0'
5895  const TreeData* treeData = (const TreeData*)(gridData->treePtr());
5896  FileMetaData meta{gridData->mGridSize, gridData->mGridSize, 0u, treeData->mVoxelCount,
5897  gridData->mGridType, gridData->mGridClass, gridData->mWorldBBox,
5898  treeData->bbox(), gridData->mVoxelSize, nameSize,
5899  {treeData->mNodeCount[0], treeData->mNodeCount[1], treeData->mNodeCount[2], 1u},
5900  {treeData->mTileCount[0], treeData->mTileCount[1], treeData->mTileCount[2]},
5901  Codec::NONE, 0u, gridData->mVersion }; // FileMetaData
5902  os.write((const char*)&head, sizeof(FileHeader)); // write header
5903  os.write((const char*)&meta, sizeof(FileMetaData)); // write meta data
5904  os.write(gridName, nameSize); // write grid name
5905  }
5906  os.write((const char*)gridData, gridData->mGridSize);// write the grid
5907 }// writeUncompressedGrid
5908 
5909 /// @brief write multiple NanoVDB grids to a single file, without compression.
5910 /// @note To write all grids in a single GridHandle simply use handle.write("fieNane")
5911 template<typename GridHandleT, template<typename...> class VecT>
5912 void writeUncompressedGrids(const char* fileName, const VecT<GridHandleT>& handles, bool raw = false)
5913 {
5914 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ofstream or FILE implementations
5915  std::ofstream os(fileName, std::ios::out | std::ios::binary | std::ios::trunc);
5916 #else
5917  struct StreamT {
5918  FILE* fptr;
5919  StreamT(const char* name) { fptr = fopen(name, "wb"); }
5920  ~StreamT() { fclose(fptr); }
5921  void write(const char* data, size_t n) { fwrite(data, 1, n, fptr); }
5922  bool is_open() const { return fptr != NULL; }
5923  } os(fileName);
5924 #endif
5925  if (!os.is_open()) {
5926  fprintf(stderr, "nanovdb::writeUncompressedGrids: Unable to open file \"%s\"for output\n", fileName);
5927  exit(EXIT_FAILURE);
5928  }
5929  for (auto& h : handles) {
5930  for (uint32_t n=0; n<h.gridCount(); ++n) writeUncompressedGrid(os, h.gridData(n), raw);
5931  }
5932 } // writeUncompressedGrids
5933 
5934 /// @brief read all uncompressed grids from a stream and return their handles.
5935 ///
5936 /// @throw std::invalid_argument if stream does not contain a single uncompressed valid NanoVDB grid
5937 ///
5938 /// @details StreamT class must support: "bool read(char*, size_t)" and "void skip(uint32_t)"
5939 template<typename GridHandleT, typename StreamT, template<typename...> class VecT>
5940 VecT<GridHandleT> readUncompressedGrids(StreamT& is, const typename GridHandleT::BufferType& pool = typename GridHandleT::BufferType())
5941 {
5942  VecT<GridHandleT> handles;
5943  GridData data;
5944  is.read((char*)&data, sizeof(GridData));
5945  if (data.isValid()) {// stream contains a raw grid buffer
5946  uint64_t size = data.mGridSize, sum = 0u;
5947  while(data.mGridIndex + 1u < data.mGridCount) {
5948  is.skip(data.mGridSize - sizeof(GridData));// skip grid
5949  is.read((char*)&data, sizeof(GridData));// read sizeof(GridData) bytes
5950  sum += data.mGridSize;
5951  }
5952  is.skip(-int64_t(sum + sizeof(GridData)));// rewind to start
5953  auto buffer = GridHandleT::BufferType::create(size + sum, &pool);
5954  is.read((char*)(buffer.data()), buffer.size());
5955  handles.emplace_back(std::move(buffer));
5956  } else {// Header0, MetaData0, gridName0, Grid0...HeaderN, MetaDataN, gridNameN, GridN
5957  is.skip(-sizeof(GridData));// rewind
5958  FileHeader head;
5959  while(is.read((char*)&head, sizeof(FileHeader))) {
5960  if (!head.isValid()) {
5961  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid magic number = \"%s\"\n", (const char*)&(head.magic));
5962  exit(EXIT_FAILURE);
5963  } else if (!head.version.isCompatible()) {
5964  char str[20];
5965  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid major version = \"%s\"\n", toStr(str, head.version));
5966  exit(EXIT_FAILURE);
5967  } else if (head.codec != Codec::NONE) {
5968  char str[8];
5969  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid codec = \"%s\"\n", toStr(str, head.codec));
5970  exit(EXIT_FAILURE);
5971  }
5972  FileMetaData meta;
5973  for (uint16_t i = 0; i < head.gridCount; ++i) { // read all grids in segment
5974  is.read((char*)&meta, sizeof(FileMetaData));// read meta data
5975  is.skip(meta.nameSize); // skip grid name
5976  auto buffer = GridHandleT::BufferType::create(meta.gridSize, &pool);
5977  is.read((char*)buffer.data(), meta.gridSize);// read grid
5978  handles.emplace_back(std::move(buffer));
5979  }// loop over grids in segment
5980  }// loop over segments
5981  }
5982  return handles;
5983 } // readUncompressedGrids
5984 
5985 /// @brief Read a multiple un-compressed NanoVDB grids from a file and return them as a vector.
5986 template<typename GridHandleT, template<typename...> class VecT>
5987 VecT<GridHandleT> readUncompressedGrids(const char* fileName, const typename GridHandleT::BufferType& buffer = typename GridHandleT::BufferType())
5988 {
5989 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ifstream or FILE implementations
5990  struct StreamT : public std::ifstream {
5991  StreamT(const char* name) : std::ifstream(name, std::ios::in | std::ios::binary){}
5992  void skip(int64_t off) { this->seekg(off, std::ios_base::cur); }
5993  };
5994 #else
5995  struct StreamT {
5996  FILE* fptr;
5997  StreamT(const char* name) { fptr = fopen(name, "rb"); }
5998  ~StreamT() { fclose(fptr); }
5999  bool read(char* data, size_t n) {
6000  size_t m = fread(data, 1, n, fptr);
6001  return n == m;
6002  }
6003  void skip(int64_t off) { fseek(fptr, (long int)off, SEEK_CUR); }
6004  bool is_open() const { return fptr != NULL; }
6005  };
6006 #endif
6007  StreamT is(fileName);
6008  if (!is.is_open()) {
6009  fprintf(stderr, "nanovdb::readUncompressedGrids: Unable to open file \"%s\"for input\n", fileName);
6010  exit(EXIT_FAILURE);
6011  }
6012  return readUncompressedGrids<GridHandleT, StreamT, VecT>(is, buffer);
6013 } // readUncompressedGrids
6014 
6015 #endif // if !defined(__CUDA_ARCH__) && !defined(__HIP__)
6016 
6017 } // namespace io
6018 
6019 // ----------------------------> Implementations of random access methods <--------------------------------------
6020 
6021 /**
6022 * @brief Below is an example of a struct used for random get methods.
6023 * @note All member methods, data, and types are mandatory.
6024 * @code
6025  template<typename BuildT>
6026  struct GetOpT {
6027  using Type = typename BuildToValueMap<BuildT>::Type;// return type
6028  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6029  __hostdev__ static Type get(const NanoRoot<BuildT>& root, args...) { }
6030  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile, args...) { }
6031  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n, args...) { }
6032  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n, args...) { }
6033  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n, args...) { }
6034  };
6035  @endcode
6036 
6037  * @brief Below is an example of the struct used for random set methods
6038  * @note All member methods and data are mandatory.
6039  * @code
6040  template<typename BuildT>
6041  struct SetOpT {
6042  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6043  __hostdev__ static void set(NanoRoot<BuildT>& root, args...) { }
6044  __hostdev__ static void set(typename NanoRoot<BuildT>::Tile& tile, args...) { }
6045  __hostdev__ static void set(NanoUpper<BuildT>& node, uint32_t n, args...) { }
6046  __hostdev__ static void set(NanoLower<BuildT>& node, uint32_t n, args...) { }
6047  __hostdev__ static void set(NanoLeaf<BuildT>& leaf, uint32_t n, args...) { }
6048  };
6049  @endcode
6050 **/
6051 
6052 /// @brief Implements Tree::getValue(math::Coord), i.e. return the value associated with a specific coordinate @c ijk.
6053 /// @tparam BuildT Build type of the grid being called
6054 /// @details The value at a coordinate either maps to the background, a tile value or a leaf value.
6055 template<typename BuildT>
6056 struct GetValue
6057 {
6059  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6060  __hostdev__ static Type get(const NanoRoot<BuildT>& root) { return root.mBackground; }
6061  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.value; }
6062  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
6063  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
6064  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.getValue(n); } // works with all build types
6065 }; // GetValue<BuildT>
6066 
6067 template<typename BuildT>
6068 struct SetValue
6069 {
6070  static_assert(!BuildTraits<BuildT>::is_special, "SetValue does not support special value types, e.g. Fp4, Fp8, Fp16, FpN");
6072  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6073  __hostdev__ static void set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
6074  __hostdev__ static void set(typename NanoRoot<BuildT>::Tile& tile, const ValueT& v) { tile.value = v; }
6075  __hostdev__ static void set(NanoUpper<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
6076  __hostdev__ static void set(NanoLower<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
6077  __hostdev__ static void set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
6078 }; // SetValue<BuildT>
6079 
6080 template<typename BuildT>
6081 struct SetVoxel
6082 {
6083  static_assert(!BuildTraits<BuildT>::is_special, "SetVoxel does not support special value types. e.g. Fp4, Fp8, Fp16, FpN");
6085  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6086  __hostdev__ static void set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
6087  __hostdev__ static void set(typename NanoRoot<BuildT>::Tile&, const ValueT&) {} // no-op
6088  __hostdev__ static void set(NanoUpper<BuildT>&, uint32_t, const ValueT&) {} // no-op
6089  __hostdev__ static void set(NanoLower<BuildT>&, uint32_t, const ValueT&) {} // no-op
6090  __hostdev__ static void set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
6091 }; // SetVoxel<BuildT>
6092 
6093 /// @brief Implements Tree::isActive(math::Coord)
6094 /// @tparam BuildT Build type of the grid being called
6095 template<typename BuildT>
6096 struct GetState
6097 {
6098  using Type = bool;
6099  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6100  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return false; }
6101  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.state > 0; }
6102  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
6103  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
6104  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.mValueMask.isOn(n); }
6105 }; // GetState<BuildT>
6106 
6107 /// @brief Implements Tree::getDim(math::Coord)
6108 /// @tparam BuildT Build type of the grid being called
6109 template<typename BuildT>
6110 struct GetDim
6111 {
6112  using Type = uint32_t;
6113  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6114  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return 0u; } // background
6115  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return 4096u; }
6116  __hostdev__ static Type get(const NanoUpper<BuildT>&, uint32_t) { return 128u; }
6117  __hostdev__ static Type get(const NanoLower<BuildT>&, uint32_t) { return 8u; }
6118  __hostdev__ static Type get(const NanoLeaf<BuildT>&, uint32_t) { return 1u; }
6119 }; // GetDim<BuildT>
6120 
6121 /// @brief Return the pointer to the leaf node that contains math::Coord. Implements Tree::probeLeaf(math::Coord)
6122 /// @tparam BuildT Build type of the grid being called
6123 template<typename BuildT>
6124 struct GetLeaf
6125 {
6126  using Type = const NanoLeaf<BuildT>*;
6127  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6128  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6129  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6130  __hostdev__ static Type get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
6131  __hostdev__ static Type get(const NanoLower<BuildT>&, uint32_t) { return nullptr; }
6132  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t) { return &leaf; }
6133 }; // GetLeaf<BuildT>
6134 
6135 /// @brief Return point to the lower internal node where math::Coord maps to one of its values, i.e. terminates
6136 /// @tparam BuildT Build type of the grid being called
6137 template<typename BuildT>
6138 struct GetLower
6139 {
6140  using Type = const NanoLower<BuildT>*;
6141  static constexpr int LEVEL = 1;// minimum level for the descent during top-down traversal
6142  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6143  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6144  __hostdev__ static Type get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
6145  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t) { return &node; }
6146 }; // GetLower<BuildT>
6147 
6148 /// @brief Return point to the upper internal node where math::Coord maps to one of its values, i.e. terminates
6149 /// @tparam BuildT Build type of the grid being called
6150 template<typename BuildT>
6151 struct GetUpper
6152 {
6153  using Type = const NanoUpper<BuildT>*;
6154  static constexpr int LEVEL = 2;// minimum level for the descent during top-down traversal
6155  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6156  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6157  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t) { return &node; }
6158 }; // GetUpper<BuildT>
6159 
6160 /// @brief Return point to the root Tile where math::Coord maps to one of its values, i.e. terminates
6161 /// @tparam BuildT Build type of the grid being called
6162 template<typename BuildT>
6163 struct GetTile
6164 {
6165  using Type = const typename NanoRoot<BuildT>::Tile*;
6166  static constexpr int LEVEL = 3;// minimum level for the descent during top-down traversal
6167  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6168  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile &tile) { return &tile; }
6169 }; // GetTile<BuildT>
6170 
6171 /// @brief Implements Tree::probeLeaf(math::Coord)
6172 /// @tparam BuildT Build type of the grid being called
6173 template<typename BuildT>
6174 struct ProbeValue
6175 {
6176  using Type = bool;
6177  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6179  __hostdev__ static Type get(const NanoRoot<BuildT>& root, ValueT& v)
6180  {
6181  v = root.mBackground;
6182  return false;
6183  }
6184  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile, ValueT& v)
6185  {
6186  v = tile.value;
6187  return tile.state > 0u;
6188  }
6189  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n, ValueT& v)
6190  {
6191  v = node.mTable[n].value;
6192  return node.mValueMask.isOn(n);
6193  }
6194  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n, ValueT& v)
6195  {
6196  v = node.mTable[n].value;
6197  return node.mValueMask.isOn(n);
6198  }
6199  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n, ValueT& v)
6200  {
6201  v = leaf.getValue(n);
6202  return leaf.mValueMask.isOn(n);
6203  }
6204 }; // ProbeValue<BuildT>
6205 
6206 /// @brief Implements Tree::getNodeInfo(math::Coord)
6207 /// @tparam BuildT Build type of the grid being called
6208 template<typename BuildT>
6209 struct GetNodeInfo
6210 {
6213  struct NodeInfo
6214  {
6215  uint32_t level, dim;
6216  ValueType minimum, maximum;
6217  FloatType average, stdDevi;
6218  CoordBBox bbox;
6219  };
6220  static constexpr int LEVEL = 0;
6221  using Type = NodeInfo;
6222  __hostdev__ static Type get(const NanoRoot<BuildT>& root)
6223  {
6224  return NodeInfo{3u, NanoUpper<BuildT>::DIM, root.minimum(), root.maximum(), root.average(), root.stdDeviation(), root.bbox()};
6225  }
6226  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile)
6227  {
6228  return NodeInfo{3u, NanoUpper<BuildT>::DIM, tile.value, tile.value, static_cast<FloatType>(tile.value), 0, CoordBBox::createCube(tile.origin(), NanoUpper<BuildT>::DIM)};
6229  }
6230  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n)
6231  {
6232  return NodeInfo{2u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
6233  }
6234  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n)
6235  {
6236  return NodeInfo{1u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
6237  }
6238  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n)
6239  {
6240  return NodeInfo{0u, leaf.dim(), leaf.minimum(), leaf.maximum(), leaf.average(), leaf.stdDeviation(), leaf.bbox()};
6241  }
6242 }; // GetNodeInfo<BuildT>
6243 
6244 } // namespace nanovdb ===================================================================
6245 
6246 #endif // end of NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
typename FloatTraits< BuildT >::FloatType FloatType
Definition: NanoVDB.h:3622
__hostdev__ ValueType getMin() const
Definition: NanoVDB.h:3657
__hostdev__ ValueOffIterator beginValueOff() const
Definition: NanoVDB.h:4298
__hostdev__ DenseIter()
Definition: NanoVDB.h:2955
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:2228
__hostdev__ bool probeValue(const math::Coord &ijk, typename util::remove_const< ChannelT >::type &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:5746
__hostdev__ ValueT value() const
Definition: NanoVDB.h:2718
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3777
typename BuildT::RootType RootType
Definition: NanoVDB.h:2103
__hostdev__ const Vec3d & voxelSize() const
Return a const reference to the size of a voxel in world units.
Definition: NanoVDB.h:2163
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:5336
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:1075
ValueT ValueType
Definition: NanoVDB.h:4905
__hostdev__ uint64_t full() const
Definition: NanoVDB.h:1827
__hostdev__ const char * shortGridName() const
Return a c-string with the name of this grid, truncated to 255 characters.
Definition: NanoVDB.h:2262
__hostdev__ util::enable_if<!util::is_same< MaskT, Mask >::value, Mask & >::type operator=(const MaskT &other)
Assignment operator that works with openvdb::util::NodeMask.
Definition: NanoVDB.h:1172
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this root node and any of its child n...
Definition: NanoVDB.h:3010
bool type
Definition: NanoVDB.h:494
Visits all tile values in this node, i.e. both inactive and active tiles.
Definition: NanoVDB.h:3310
__hostdev__ math::BBox< CoordT > bbox() const
Return the bounding box in index space of active values in this leaf node.
Definition: NanoVDB.h:4411
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4325
uint16_t ArrayType
Definition: NanoVDB.h:4152
__hostdev__ CheckMode toCheckMode(const Checksum &checksum)
Maps 64 bit checksum to CheckMode enum.
Definition: NanoVDB.h:1866
C++11 implementation of std::enable_if.
Definition: Util.h:335
FloatType mStdDevi
Definition: NanoVDB.h:3634
float type
Definition: NanoVDB.h:501
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3873
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:5342
__hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:4402
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3997
__hostdev__ bool isEmpty() const
Definition: NanoVDB.h:5534
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:4847
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:4367
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:5511
typename util::match_const< DataType, RootT >::type DataT
Definition: NanoVDB.h:2830
void writeUncompressedGrids(const char *fileName, const VecT< GridHandleT > &handles, bool raw=false)
write multiple NanoVDB grids to a single file, without compression.
Definition: NanoVDB.h:5912
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2407
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:3035
Definition: NanoVDB.h:5842
__hostdev__ Vec3d getVoxelSize() const
Return a voxels size in each coordinate direction, measured at the origin.
Definition: NanoVDB.h:1511
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:4818
StatsT mStdDevi
Definition: NanoVDB.h:3155
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:2242
__hostdev__ const Vec3dBBox & worldBBox() const
Definition: NanoVDB.h:5525
__hostdev__ Vec3T applyMap(const Vec3T &xyz) const
Definition: NanoVDB.h:1978
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findFirst() const
Definition: NanoVDB.h:1333
__hostdev__ TileT * tile() const
Definition: NanoVDB.h:2839
__hostdev__ bool isOff(uint32_t n) const
Return true if the given bit is NOT set.
Definition: NanoVDB.h:1201
DataType::template TileIter< DataT > mTileIter
Definition: NanoVDB.h:2832
__hostdev__ Vec3T applyMapF(const Vec3T &xyz) const
Definition: NanoVDB.h:1989
__hostdev__ const char * gridName() const
Definition: NanoVDB.h:2046
__hostdev__ ChannelT * setChannel(ChannelT *channelPtr)
Change to an external channel.
Definition: NanoVDB.h:5725
GridBlindDataClass mDataClass
Definition: NanoVDB.h:1556
typename util::match_const< Tile, RootT >::type TileT
Definition: NanoVDB.h:2831
__hostdev__ ChildT * getChild(uint32_t n)
Returns a pointer to the child node at the specifed linear offset.
Definition: NanoVDB.h:3183
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:5338
__hostdev__ Vec3T applyIJTF(const Vec3T &xyz) const
Definition: NanoVDB.h:1508
VDB Tree, which is a thin wrapper around a RootNode.
Definition: NanoVDB.h:2394
__hostdev__ Vec3T applyMapF(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 32bit floating point arithmetics.
Definition: NanoVDB.h:1439
decltype(mFlags) Type
Definition: NanoVDB.h:926
__hostdev__ Vec3T indexToWorld(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:2174
math::BBox< CoordType > BBoxType
Definition: NanoVDB.h:2819
__hostdev__ Tile * tile(uint32_t n)
Definition: NanoVDB.h:2651
__hostdev__ DenseIter operator++(int)
Definition: NanoVDB.h:2969
__hostdev__ bool isActive() const
Definition: NanoVDB.h:2893
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:5339
__hostdev__ GridClass mapToGridClass(GridClass defaultClass=GridClass::Unknown)
Definition: NanoVDB.h:889
__hostdev__ bool isChild() const
Definition: NanoVDB.h:2633
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:2439
__hostdev__ ValueIterator()
Definition: NanoVDB.h:4308
float Type
Definition: NanoVDB.h:521
float FloatType
Definition: NanoVDB.h:3702
__hostdev__ CoordT origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:4387
Highest level of the data structure. Contains a tree and a world->index transform (that currently onl...
Definition: NanoVDB.h:2099
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:4824
__hostdev__ ValueOnIter(RootT *parent)
Definition: NanoVDB.h:2922
Vec3dBBox mWorldBBox
Definition: NanoVDB.h:1906
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3295
__hostdev__ const NodeTrait< RootT, 1 >::type * getFirstLower() const
Definition: NanoVDB.h:2533
__hostdev__ Vec3T applyIJTF(const Vec3T &xyz) const
Definition: NanoVDB.h:1997
FloatType stdDevi
Definition: NanoVDB.h:6217
__hostdev__ char * toStr(char *dst, GridType gridType)
Maps a GridType to a c-string.
Definition: NanoVDB.h:254
__hostdev__ ValueType maximum() const
Return a const reference to the maximum active value encoded in this leaf node.
Definition: NanoVDB.h:4373
__hostdev__ DenseIterator(const InternalNode *parent)
Definition: NanoVDB.h:3394
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4126
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2994
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this internal node.
Definition: NanoVDB.h:3444
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5134
#define NANOVDB_PATCH_VERSION_NUMBER
Definition: NanoVDB.h:148
__hostdev__ void init(std::initializer_list< GridFlags > list={GridFlags::IsBreadthFirst}, uint64_t gridSize=0u, const Map &map=Map(), GridType gridType=GridType::Unknown, GridClass gridClass=GridClass::Unknown)
Definition: NanoVDB.h:1918
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:4956
static __hostdev__ constexpr uint64_t memUsage()
Definition: NanoVDB.h:3776
__hostdev__ bool getValue(uint32_t i) const
Definition: NanoVDB.h:4004
__hostdev__ bool isValue() const
Definition: NanoVDB.h:2703
__hostdev__ void setValueOnly(uint32_t offset, const ValueType &v)
Sets the value at the specified location but leaves its state unchanged.
Definition: NanoVDB.h:4457
__hostdev__ Vec3T applyInverseMap(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
Definition: NanoVDB.h:1465
__hostdev__ ValueOnIter()
Definition: NanoVDB.h:2921
Class to access values in channels at a specific voxel location.
Definition: NanoVDB.h:5673
__hostdev__ void setMask(uint32_t offset, bool v)
Definition: NanoVDB.h:4139
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3655
Definition: NanoVDB.h:2658
static __hostdev__ uint32_t padding()
Definition: NanoVDB.h:4427
typename GridT::TreeType Type
Definition: NanoVDB.h:2380
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:2859
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:5337
char mGridName[MaxNameSize]
Definition: NanoVDB.h:1904
__hostdev__ bool operator>(const Version &rhs) const
Definition: NanoVDB.h:698
static __hostdev__ size_t memUsage(uint32_t bitWidth)
Definition: NanoVDB.h:3881
__hostdev__ void setChild(const CoordType &k, const void *ptr, const RootData *data)
Definition: NanoVDB.h:2619
__hostdev__ Version version() const
Definition: NanoVDB.h:2121
PointAccessor(const NanoGrid< Point > &grid)
Definition: NanoVDB.h:5612
__hostdev__ const ValueT & getMax() const
Definition: NanoVDB.h:2784
const GridBlindMetaData & operator=(const GridBlindMetaData &rhs)
Copy assignment operator that resets mDataOffset and copies mName.
Definition: NanoVDB.h:1599
__hostdev__ ValueType getValue(uint32_t i) const
Definition: NanoVDB.h:3648
__hostdev__ Map(double s, const Vec3d &t=Vec3d(0.0, 0.0, 0.0))
Definition: NanoVDB.h:1399
__hostdev__ ChildNodeType * probeChild(const CoordType &ijk)
Definition: NanoVDB.h:3493
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:3249
__hostdev__ void setLongGridNameOn(bool on=true)
Definition: NanoVDB.h:1967
__hostdev__ Mask(const Mask &other)
Copy constructor.
Definition: NanoVDB.h:1145
static __hostdev__ uint32_t CoordToOffset(const CoordT &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:4485
__hostdev__ uint64_t lastOffset() const
Definition: NanoVDB.h:4101
MaskT< LOG2DIM > mMask
Definition: NanoVDB.h:4125
__hostdev__ const BlindDataT * getBlindData(uint32_t n) const
Definition: NanoVDB.h:2292
#define NANOVDB_MAGIC_NUMB
Definition: NanoVDB.h:139
__hostdev__ void setWord(WordT w, uint32_t n)
Definition: NanoVDB.h:1163
GridClass
Classes (superset of OpenVDB) that are currently supported by NanoVDB.
Definition: NanoVDB.h:291
typename DataType::ValueT ValueType
Definition: NanoVDB.h:2814
uint64_t magic
Definition: NanoVDB.h:5818
__hostdev__ bool isPartial() const
return true if the 64 bit checksum is partial, i.e. of head only
Definition: NanoVDB.h:1836
static T scalar(const T &s)
Definition: NanoVDB.h:733
typename RootT::BuildType BuildType
Definition: NanoVDB.h:2409
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4014
Definition: NanoVDB.h:2881
__hostdev__ void * treePtr()
Definition: NanoVDB.h:2000
uint32_t state
Definition: NanoVDB.h:2639
BuildT BuildType
Definition: NanoVDB.h:3621
__hostdev__ void setDev(const FloatType &v)
Definition: NanoVDB.h:3672
__hostdev__ ConstTileIterator cbeginTile() const
Definition: NanoVDB.h:2729
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2406
Return the pointer to the leaf node that contains math::Coord. Implements Tree::probeLeaf(math::Coord...
Definition: NanoVDB.h:1755
__hostdev__ bool getDev() const
Definition: NanoVDB.h:4008
__hostdev__ bool isValid(GridType gridType, GridClass gridClass)
return true if the combination of GridType and GridClass is valid.
Definition: NanoVDB.h:608
static __hostdev__ bool isAligned(const void *p)
return true if the specified pointer is 32 byte aligned
Definition: NanoVDB.h:542
__hostdev__ void * getRoot()
Get a non-const void pointer to the root node (never NULL)
Definition: NanoVDB.h:2356
__hostdev__ const CoordBBox & indexBBox() const
Definition: NanoVDB.h:5526
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:3306
uint8_t mFlags
Definition: NanoVDB.h:3628
__hostdev__ TileT * operator->() const
Definition: NanoVDB.h:2688
__hostdev__ LeafNodeType * getFirstLeaf()
Template specializations of getFirstNode.
Definition: NanoVDB.h:2530
uint64_t mOffset
Definition: NanoVDB.h:4160
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3678
__hostdev__ ValueIter(RootT *parent)
Definition: NanoVDB.h:2888
static int64_t PtrDiff(const void *p, const void *q)
Compute the distance, in bytes, between two pointers, dist = p - q.
Definition: Util.h:464
__hostdev__ uint32_t gridIndex() const
Return index of this grid in the buffer.
Definition: NanoVDB.h:2134
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:2433
__hostdev__ bool isEmpty() const
Return true if the root is empty, i.e. has not child nodes or constant tiles.
Definition: NanoVDB.h:2365
Definition: NanoVDB.h:2089
__hostdev__ const StatsT & stdDeviation() const
Definition: NanoVDB.h:2786
LeafNodeType Node0
Definition: NanoVDB.h:2416
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:3034
Checksum mChecksum
Definition: NanoVDB.h:1898
Return point to the upper internal node where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6151
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:2229
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is l...
Definition: NanoVDB.h:5576
typename DataType::StatsT FloatType
Definition: NanoVDB.h:2815
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this root node and any of its child nodes...
Definition: NanoVDB.h:3019
Below is an example of a struct used for random get methods.
Definition: NanoVDB.h:1745
BitFlags()
Definition: NanoVDB.h:927
__hostdev__ FloatType getAvg() const
Definition: NanoVDB.h:3659
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:2877
__hostdev__ bool isActive(const CoordT &ijk) const
Return true if the voxel value at the given coordinate is active.
Definition: NanoVDB.h:4461
__hostdev__ ConstDenseIterator cbeginDense() const
Definition: NanoVDB.h:2981
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4106
ChildT ChildNodeType
Definition: NanoVDB.h:2808
#define NANOVDB_MAGIC_GRID
Definition: NanoVDB.h:140
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4013
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:3379
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this leaf node.
Definition: NanoVDB.h:4366
void set(const MatT &mat, const MatT &invMat, const Vec3T &translate, double taper=1.0)
Initialize the member data from 3x3 or 4x4 matrices.
Definition: NanoVDB.h:1515
static __hostdev__ KeyT CoordToKey(const CoordType &ijk)
Definition: NanoVDB.h:2579
__hostdev__ void setAvg(float avg)
Definition: NanoVDB.h:3752
MaskT< LOG2DIM > ArrayType
Definition: NanoVDB.h:3938
T Type
Definition: NanoVDB.h:458
__hostdev__ bool isActive() const
Definition: NanoVDB.h:3338
uint64_t mMagic
Definition: NanoVDB.h:1897
__hostdev__ ChannelT * setChannel(uint32_t channelID)
Change to an internal channel, assuming it exists as as blind data in the IndexGrid.
Definition: NanoVDB.h:5731
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4012
__hostdev__ bool isOff() const
Return true if none of the bits are set in this Mask.
Definition: NanoVDB.h:1213
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:5510
__hostdev__ uint32_t valueCount() const
Definition: NanoVDB.h:4097
uint64_t mGridSize
Definition: NanoVDB.h:1903
__hostdev__ NodeT * probeChild(ValueType &value) const
Definition: NanoVDB.h:2957
RootT Node3
Definition: NanoVDB.h:2413
PointType
Definition: NanoVDB.h:396
__hostdev__ void toggle(uint32_t n)
Definition: NanoVDB.h:1296
Trait to map from LEVEL to node type.
Definition: NanoVDB.h:4618
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4055
__hostdev__ void setMax(const ValueT &v)
Definition: NanoVDB.h:2789
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4056
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:2231
__hostdev__ ValueIter()
Definition: NanoVDB.h:2887
__hostdev__ const char * shortGridName() const
Definition: NanoVDB.h:5523
#define NANOVDB_MINOR_VERSION_NUMBER
Definition: NanoVDB.h:147
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:4925
__hostdev__ WordT getWord(uint32_t n) const
Definition: NanoVDB.h:1156
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this leaf node.
Definition: NanoVDB.h:4379
uint64_t KeyT
Return a key based on the coordinates of a voxel.
Definition: NanoVDB.h:2577
Vec3d mVoxelSize
Definition: NanoVDB.h:1907
BuildT ValueType
Definition: NanoVDB.h:3620
uint64_t mFlags
Definition: NanoVDB.h:3148
__hostdev__ const uint32_t & getTableSize() const
Definition: NanoVDB.h:3007
int64_t mDataOffset
Definition: NanoVDB.h:1552
__hostdev__ ValueIterator()
Definition: NanoVDB.h:3316
__hostdev__ Checksum(uint32_t head, uint32_t tail)
Constructor that allows the two 32bit checksums to be initiated explicitly.
Definition: NanoVDB.h:1809
GridBlindMetaData()
Empty constructor.
Definition: NanoVDB.h:1562
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:1104
__hostdev__ Mask()
Initialize all bits to zero.
Definition: NanoVDB.h:1132
__hostdev__ bool isCached2(const CoordType &ijk) const
Definition: NanoVDB.h:5116
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:3488
Implements Tree::getNodeInfo(math::Coord)
Definition: NanoVDB.h:1759
__hostdev__ uint64_t getMax() const
Definition: NanoVDB.h:4103
__hostdev__ void setStdDeviationOn(bool on=true)
Definition: NanoVDB.h:1969
uint64_t voxelCount
Definition: NanoVDB.h:5844
__hostdev__ uint32_t gridCount() const
Return total number of grids in the buffer.
Definition: NanoVDB.h:2137
__hostdev__ bool isRootConnected() const
return true if RootData follows TreeData in memory without any extra padding
Definition: NanoVDB.h:2084
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
get iterators over attributes to points at a specific voxel location
Definition: NanoVDB.h:5588
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:4845
__hostdev__ bool isValid() const
Methods related to the classification of this grid.
Definition: NanoVDB.h:2227
__hostdev__ void setValue(uint32_t n, const ValueT &v)
Definition: NanoVDB.h:3176
ValueType minimum
Definition: NanoVDB.h:6216
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:2990
ChildT UpperNodeType
Definition: NanoVDB.h:2811
uint32_t mGridCount
Definition: NanoVDB.h:1902
CoordT mBBoxMin
Definition: NanoVDB.h:3626
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:5506
__hostdev__ bool isActive() const
Definition: NanoVDB.h:4330
uint64_t FloatType
Definition: NanoVDB.h:776
__hostdev__ const void * nodePtr() const
Return a non-const void pointer to the first node at LEVEL.
Definition: NanoVDB.h:2008
typename NanoLeaf< BuildT >::ValueType ValueT
Definition: NanoVDB.h:6071
__hostdev__ FloatType getDev() const
Definition: NanoVDB.h:3660
__hostdev__ FloatType stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this leaf node...
Definition: NanoVDB.h:4382
Definition: NanoVDB.h:6213
__hostdev__ bool isValid() const
return true if the magic number and the version are both valid
Definition: NanoVDB.h:1952
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:1703
uint64_t FloatType
Definition: NanoVDB.h:770
__hostdev__ RootT & root()
Definition: NanoVDB.h:2431
float mQuantum
Definition: NanoVDB.h:3710
char * strcpy(char *dst, const char *src)
Copy characters from src to dst.
Definition: Util.h:166
double FloatType
Definition: NanoVDB.h:800
static const int MaxNameSize
Definition: NanoVDB.h:1551
__hostdev__ bool isIndex(GridType gridType)
Return true if the GridType maps to a special index type (not a POD integer type).
Definition: NanoVDB.h:597
Map mMap
Definition: NanoVDB.h:1905
#define NANOVDB_MAGIC_FILE
Definition: NanoVDB.h:141
__hostdev__ ValueType minimum() const
Return a const reference to the minimum active value encoded in this leaf node.
Definition: NanoVDB.h:4370
float type
Definition: NanoVDB.h:515
__hostdev__ const uint32_t & tileCount() const
Return the number of tiles encoded in this root node.
Definition: NanoVDB.h:3006
__hostdev__ Vec3T applyJacobian(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmet...
Definition: NanoVDB.h:1448
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:5515
Utility functions.
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2813
Bit-compacted representation of all three version numbers.
Definition: NanoVDB.h:673
__hostdev__ uint64_t lastOffset() const
Definition: NanoVDB.h:4080
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:5504
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:5509
__hostdev__ util::enable_if< BuildTraits< T >::is_index, const uint64_t & >::type valueCount() const
Return the total number of values indexed by this IndexGrid.
Definition: NanoVDB.h:2144
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:4839
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, ChannelT *channelPtr)
Ctor from an IndexGrid and an external channel.
Definition: NanoVDB.h:5699
__hostdev__ bool operator>=(const Version &rhs) const
Definition: NanoVDB.h:699
typename DataType::ValueT ValueType
Definition: NanoVDB.h:3244
typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:1687
static __hostdev__ uint32_t dim()
Return the dimension, in index space, of this leaf node (typically 8 as for openvdb leaf nodes!) ...
Definition: NanoVDB.h:4408
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:5267
typename NanoLeaf< BuildT >::ValueType Type
Definition: NanoVDB.h:6058
Definition: NanoVDB.h:3136
static __hostdev__ uint32_t dim()
Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32) ...
Definition: NanoVDB.h:3438
__hostdev__ bool operator==(const Mask &other) const
Definition: NanoVDB.h:1186
__hostdev__ uint32_t gridIndex() const
Definition: NanoVDB.h:5521
__hostdev__ ChildIter & operator++()
Definition: NanoVDB.h:2860
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:2708
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4195
GridBlindMetaData(const GridBlindMetaData &other)
Copy constructor that resets mDataOffset and zeros out mName.
Definition: NanoVDB.h:1585
__hostdev__ TileIterator probe(const CoordT &ijk)
Definition: NanoVDB.h:2731
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4010
__hostdev__ const ChildT * probeChild(ValueType &value) const
Definition: NanoVDB.h:3400
__hostdev__ bool operator==(const Checksum &rhs) const
return true if the checksums are identical
Definition: NanoVDB.h:1856
__hostdev__ const NanoGrid< BuildT > & grid() const
Definition: NanoVDB.h:5562
char * strncpy(char *dst, const char *src, size_t max)
Copies the first num characters of src to dst. If the end of the source C string (which is signaled b...
Definition: Util.h:185
__hostdev__ DenseIterator beginAll() const
Definition: NanoVDB.h:1129
__hostdev__ ConstValueIterator cbeginValueAll() const
Definition: NanoVDB.h:2912
__hostdev__ bool isValid() const
return true if this meta data has a valid combination of semantic, class and value tags ...
Definition: NanoVDB.h:1641
__hostdev__ void disable()
Definition: NanoVDB.h:1845
__hostdev__ const NanoGrid< IndexT > & grid() const
Return a const reference to the IndexGrid.
Definition: NanoVDB.h:5712
static constexpr uint32_t SIZE
Definition: NanoVDB.h:1030
uint32_t mNodeCount[3]
Definition: NanoVDB.h:2345
ValueType mMaximum
Definition: NanoVDB.h:3632
typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:1688
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3994
__hostdev__ uint64_t blindDataSize() const
return size in bytes of the blind data represented by this blind meta data
Definition: NanoVDB.h:1669
static __hostdev__ CoordT KeyToCoord(const KeyT &key)
Definition: NanoVDB.h:2587
__hostdev__ const Map & map() const
Return a const reference to the Map for this grid.
Definition: NanoVDB.h:2166
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:4350
__hostdev__ void setRoot(const void *root)
Definition: NanoVDB.h:2350
__hostdev__ BaseIter()
Definition: NanoVDB.h:2833
static __hostdev__ uint32_t wordCount()
Return the number of machine words used by this Mask.
Definition: NanoVDB.h:1040
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:5516
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:1103
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3968
typename DataType::BuildT BuildType
Definition: NanoVDB.h:2816
__hostdev__ void setMin(const bool &)
Definition: NanoVDB.h:3962
__hostdev__ bool isValid() const
Definition: NanoVDB.h:5503
__hostdev__ const StatsT & average() const
Definition: NanoVDB.h:2785
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4052
__hostdev__ uint32_t tail() const
Definition: NanoVDB.h:1831
__hostdev__ bool getAvg() const
Definition: NanoVDB.h:3954
__hostdev__ bool updateBBox()
Updates the local bounding box of active voxels in this node. Return true if bbox was updated...
Definition: NanoVDB.h:4565
__hostdev__ DenseIter & operator++()
Definition: NanoVDB.h:2964
Return point to the root Tile where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6163
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:2243
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:2235
__hostdev__ uint64_t last(uint32_t i) const
Definition: NanoVDB.h:4177
bool FloatType
Definition: NanoVDB.h:764
__hostdev__ TileT & operator*() const
Definition: NanoVDB.h:2683
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this internal node and an...
Definition: NanoVDB.h:3461
__hostdev__ Iterator & operator++()
Definition: NanoVDB.h:1078
Definition: NanoVDB.h:750
#define __hostdev__
Definition: Util.h:73
__hostdev__ const Checksum & checksum() const
Definition: NanoVDB.h:5532
typename DataType::FloatType FloatType
Definition: NanoVDB.h:4226
#define NANOVDB_DATA_ALIGNMENT
Definition: NanoVDB.h:133
typename DataType::Tile Tile
Definition: NanoVDB.h:2821
__hostdev__ bool isValid(const GridBlindDataClass &blindClass, const GridBlindDataSemantic &blindSemantics, const GridType &blindType)
return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid...
Definition: NanoVDB.h:632
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:5519
__hostdev__ DenseIterator operator++(int)
Definition: NanoVDB.h:1111
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4086
__hostdev__ void setMax(const ValueT &v)
Definition: NanoVDB.h:3224
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:5513
Definition: NanoVDB.h:920
Coord CoordType
Definition: NanoVDB.h:4228
Dummy type for a 16bit quantization of float point values.
Definition: NanoVDB.h:196
uint8_t ArrayType
Definition: NanoVDB.h:3809
typename Mask< Log2Dim >::template Iterator< On > MaskIterT
Definition: NanoVDB.h:3254
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:2240
__hostdev__ TreeT & tree()
Return a non-const reference to the tree.
Definition: NanoVDB.h:2157
CoordT mBBoxMin
Definition: NanoVDB.h:4155
__hostdev__ void setFirstNode(const NodeT *node)
Definition: NanoVDB.h:2362
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:1723
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this internal node and any of its chi...
Definition: NanoVDB.h:3458
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3784
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3167
__hostdev__ void setMin(float min)
Definition: NanoVDB.h:3746
__hostdev__ void setValue(uint32_t offset, bool)
Definition: NanoVDB.h:4009
__hostdev__ void setMax(const ValueType &v)
Definition: NanoVDB.h:3670
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4017
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3640
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:2838
__hostdev__ ChildIter()
Definition: NanoVDB.h:3274
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4192
__hostdev__ BlindDataT * getBlindData(uint32_t n)
Definition: NanoVDB.h:2299
__hostdev__ void setDev(const StatsT &v)
Definition: NanoVDB.h:3226
__hostdev__ ValueType getLastValue() const
If the last entry in this node&#39;s table is a tile, return the tile&#39;s value. Otherwise, return the result of calling getLastValue() on the child.
Definition: NanoVDB.h:3481
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:5066
__hostdev__ bool isOn(uint32_t n) const
Return true if the given bit is set.
Definition: NanoVDB.h:1198
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3882
__hostdev__ Vec3T applyInverseJacobian(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmet...
Definition: NanoVDB.h:1488
uint64_t ValueType
Definition: NanoVDB.h:4034
uint16_t ArrayType
Definition: NanoVDB.h:3839
__hostdev__ const MaskType< LOG2DIM > & childMask() const
Return a const reference to the bit mask of child nodes in this internal node.
Definition: NanoVDB.h:3448
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4053
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4054
ValueT value
Definition: NanoVDB.h:2640
__hostdev__ void setDev(float dev)
Definition: NanoVDB.h:3755
Node caching at all (three) tree levels.
Definition: NanoVDB.h:5218
__hostdev__ void setDev(const StatsT &v)
Definition: NanoVDB.h:2791
__hostdev__ OnIterator beginOn() const
Definition: NanoVDB.h:1125
Definition: NanoVDB.h:1747
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4194
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5341
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4185
bool Type
Definition: NanoVDB.h:6098
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:1702
__hostdev__ bool isMaskOn(uint32_t offset) const
Definition: NanoVDB.h:4138
BuildT BuildType
Definition: NanoVDB.h:5242
Stuct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode) ...
Definition: NanoVDB.h:3616
__hostdev__ const BBoxType & bbox() const
Return a const reference to the index bounding box of all the active values in this tree...
Definition: NanoVDB.h:2997
GridBlindDataSemantic
Blind-data Semantics that are currently understood by NanoVDB.
Definition: NanoVDB.h:419
Version mVersion
Definition: NanoVDB.h:1899
__hostdev__ void setAverageOn(bool on=true)
Definition: NanoVDB.h:1968
__hostdev__ bool isSequential() const
return true if nodes at all levels can safely be accessed with simple linear offsets ...
Definition: NanoVDB.h:2256
__hostdev__ Map()
Default constructor for the identity map.
Definition: NanoVDB.h:1388
GridFlags
Grid flags which indicate what extra information is present in the grid buffer.
Definition: NanoVDB.h:328
Metafunction used to determine if the first template parameter is a specialization of the class templ...
Definition: Util.h:451
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3850
__hostdev__ uint32_t & checksum(int i)
Definition: NanoVDB.h:1823
__hostdev__ DenseIterator()
Definition: NanoVDB.h:3389
uint32_t nameSize
Definition: NanoVDB.h:5850
ReadAccessor< ValueT, LEVEL0, LEVEL1, LEVEL2 > createAccessor(const NanoGrid< ValueT > &grid)
Free-standing function for convenient creation of a ReadAccessor with optional and customizable node ...
Definition: NanoVDB.h:5436
RootT RootType
Definition: NanoVDB.h:2403
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3720
Definition: GridHandle.h:27
float type
Definition: NanoVDB.h:508
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Definition: NanoVDB.h:5530
CoordBBox bbox
Definition: NanoVDB.h:6218
float Type
Definition: NanoVDB.h:514
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Return true if this tree is empty, i.e. contains no values or nodes.
Definition: NanoVDB.h:2448
Visits all tile values and child nodes of this node.
Definition: NanoVDB.h:3383
GridType mGridType
Definition: NanoVDB.h:1909
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4137
__hostdev__ uint64_t gridSize() const
Definition: NanoVDB.h:5520
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:4931
Definition: NanoVDB.h:1061
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:3490
GridType gridType
Definition: NanoVDB.h:5845
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:4044
Define static boolean tests for template build types.
Definition: NanoVDB.h:435
__hostdev__ bool isFull() const
return true if the 64 bit checksum is fill, i.e. of both had and nodes
Definition: NanoVDB.h:1840
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:2238
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:2878
char * sprint(char *dst, T var1, Types...var2)
prints a variable number of string and/or numbers to a destination string
Definition: Util.h:286
Bit-mask to encode active states and facilitate sequential iterators and a fast codec for I/O compres...
Definition: NanoVDB.h:1027
CoordT CoordType
Definition: NanoVDB.h:4906
__hostdev__ const GridBlindMetaData * blindMetaData(uint32_t n) const
Returns a const reference to the blindMetaData at the specified linear offset.
Definition: NanoVDB.h:2040
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:4951
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3209
static ElementType scalar(const T &v)
Definition: NanoVDB.h:744
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:3345
__hostdev__ void setMax(float max)
Definition: NanoVDB.h:3749
__hostdev__ TileIter()
Definition: NanoVDB.h:2666
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3783
__hostdev__ bool getMax() const
Definition: NanoVDB.h:3953
Index64 memUsage(const TreeT &tree, bool threaded=true)
Return the total amount of memory in bytes occupied by this tree.
Definition: Count.h:493
uint64_t mData2
Definition: NanoVDB.h:1914
typename ChildT::ValueType ValueT
Definition: NanoVDB.h:2569
float mMinimum
Definition: NanoVDB.h:3709
__hostdev__ uint64_t offset() const
Definition: NanoVDB.h:4174
static __hostdev__ Coord OffsetToLocalCoord(uint32_t n)
Definition: NanoVDB.h:3513
__hostdev__ const Vec3d & voxelSize() const
Return a vector of the axial voxel sizes.
Definition: NanoVDB.h:5718
__hostdev__ constexpr uint32_t strlen()
return the number of characters (including null termination) required to convert enum type to a strin...
Definition: NanoVDB.h:210
typename NanoLeaf< BuildT >::FloatType FloatType
Definition: NanoVDB.h:6212
Definition: NanoVDB.h:4030
KeyT key
Definition: NanoVDB.h:2637
__hostdev__ bool isChild() const
Definition: NanoVDB.h:2698
uint64_t FloatType
Definition: NanoVDB.h:782
__hostdev__ uint64_t pointCount() const
Definition: NanoVDB.h:4175
typename DataType::StatsT FloatType
Definition: NanoVDB.h:3245
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:5128
__hostdev__ uint32_t & tail()
Definition: NanoVDB.h:1832
bool ValueType
Definition: NanoVDB.h:3935
__hostdev__ Tile * probeTile(const CoordT &ijk)
Definition: NanoVDB.h:2747
__hostdev__ uint64_t getAvg() const
Definition: NanoVDB.h:4084
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:4851
__hostdev__ uint64_t checksum() const
return the 64 bit checksum of this instance
Definition: NanoVDB.h:1821
Dummy type for a voxel whose value equals an offset into an external value array of active values...
Definition: NanoVDB.h:175
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:4265
Top-most node of the VDB tree structure.
Definition: NanoVDB.h:2804
int64_t child
Definition: NanoVDB.h:3139
#define NANOVDB_MAJOR_VERSION_NUMBER
Definition: NanoVDB.h:146
__hostdev__ Vec3T applyJacobianF(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmet...
Definition: NanoVDB.h:1457
uint8_t ArrayType
Definition: NanoVDB.h:3772
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3644
Struct to derive node type from its level in a given grid, tree or root while preserving constness...
Definition: NanoVDB.h:1680
typename GridT::TreeType type
Definition: NanoVDB.h:2381
__hostdev__ Codec toCodec(const char *str)
Definition: NanoVDB.h:5805
Definition: IndexIterator.h:43
Definition: NanoVDB.h:2845
uint32_t level
Definition: NanoVDB.h:6215
uint16_t padding
Definition: NanoVDB.h:5854
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3820
uint32_t mTileCount[3]
Definition: NanoVDB.h:2346
typename RootT::ChildNodeType Node2
Definition: NanoVDB.h:2414
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3371
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this internal node and any of its chi...
Definition: NanoVDB.h:3455
ValueType mMinimum
Definition: NanoVDB.h:3631
__hostdev__ const void * blindData(uint32_t n) const
Returns a const pointer to the blindData at the specified linear offset.
Definition: NanoVDB.h:2284
__hostdev__ GridType toGridType()
Maps from a templated build type to a GridType enum.
Definition: NanoVDB.h:807
size_t strlen(const char *str)
length of a c-sting, excluding &#39;\0&#39;.
Definition: Util.h:153
MatType scale(const Vec3< typename MatType::value_type > &s)
Return a matrix that scales by s.
Definition: Mat.h:615
static __hostdev__ uint32_t dim()
Definition: NanoVDB.h:4221
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:5328
uint64_t Type
Definition: NanoVDB.h:465
uint64_t type
Definition: NanoVDB.h:473
const typename NanoRoot< BuildT >::Tile * Type
Definition: NanoVDB.h:6165
__hostdev__ float getDev() const
return the quantized standard deviation of the active values in this node
Definition: NanoVDB.h:3743
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:3366
ValueT mMaximum
Definition: NanoVDB.h:3153
__hostdev__ uint64_t idx(int i, int j, int k) const
Definition: NanoVDB.h:5738
static __hostdev__ CoordT OffsetToLocalCoord(uint32_t n)
Compute the local coordinates from a linear offset.
Definition: NanoVDB.h:4392
__hostdev__ const math::BBox< CoordType > & bbox() const
Return a const reference to the bounding box in index space of active values in this internal node an...
Definition: NanoVDB.h:3470
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:3487
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this root node...
Definition: NanoVDB.h:3022
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4178
__hostdev__ bool operator<=(const Version &rhs) const
Definition: NanoVDB.h:697
__hostdev__ bool getValue(uint32_t i) const
Definition: NanoVDB.h:3951
T ElementType
Definition: NanoVDB.h:732
bool Type
Definition: NanoVDB.h:6176
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2107
float Type
Definition: NanoVDB.h:528
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:4964
__hostdev__ auto pos() const
Definition: NanoVDB.h:2677
uint64_t Type
Definition: NanoVDB.h:472
__hostdev__ uint32_t getMinor() const
Definition: NanoVDB.h:702
Struct with all the member data of the RootNode (useful during serialization of an openvdb RootNode) ...
Definition: NanoVDB.h:2567
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3300
Data encoded at the head of each segment of a file or stream.
Definition: NanoVDB.h:5817
__hostdev__ ValueIterator operator++(int)
Definition: NanoVDB.h:4341
__hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const
Return the index of the first blind data with specified semantic if found, otherwise -1...
Definition: NanoVDB.h:2312
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3715
__hostdev__ ValueOffIterator(const LeafNode *parent)
Definition: NanoVDB.h:4280
openvdb::GridBase Grid
Definition: Utils.h:43
__hostdev__ Mask(bool on)
Definition: NanoVDB.h:1137
__hostdev__ void setOff(uint32_t n)
Set the specified bit off.
Definition: NanoVDB.h:1224
__hostdev__ const char * gridName() const
Return a c-string with the name of this grid.
Definition: NanoVDB.h:2259
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4286
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:5507
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:2405
double FloatType
Definition: NanoVDB.h:758
Version version
Definition: NanoVDB.h:5855
__hostdev__ int blindDataCount() const
Definition: NanoVDB.h:5528
uint64_t type
Definition: NanoVDB.h:480
__hostdev__ ChildT * getChild(const Tile *tile)
Returns a const reference to the child node in the specified tile.
Definition: NanoVDB.h:2772
__hostdev__ const Checksum & checksum() const
Return checksum of the grid buffer.
Definition: NanoVDB.h:2265
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:5054
GridClass mGridClass
Definition: NanoVDB.h:1908
__hostdev__ Version(uint32_t data)
Constructor from a raw uint32_t data representation.
Definition: NanoVDB.h:686
Dummy type for a voxel whose value equals an offset into an external value array. ...
Definition: NanoVDB.h:172
Maps one type (e.g. the build types above) to other (actual) types.
Definition: NanoVDB.h:456
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2426
__hostdev__ uint32_t nodeCount(uint32_t level) const
Definition: NanoVDB.h:5531
__hostdev__ ValueType getLastValue() const
Return the last value in this leaf node.
Definition: NanoVDB.h:4447
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3758
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:4960
__hostdev__ bool isHalf() const
Definition: NanoVDB.h:1837
__hostdev__ uint64_t getDev() const
Definition: NanoVDB.h:4085
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3949
math::BBox< CoordT > mBBox
Definition: NanoVDB.h:2599
__hostdev__ ValueIterator(const LeafNode *parent)
Definition: NanoVDB.h:4313
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:4812
typename RootT::ValueType ValueType
Definition: NanoVDB.h:4806
__hostdev__ DataT * data() const
Definition: NanoVDB.h:2693
__hostdev__ uint32_t id() const
Definition: NanoVDB.h:700
__hostdev__ size_t memUsage() const
Definition: NanoVDB.h:3880
__hostdev__ ValueType getMin() const
Definition: NanoVDB.h:4187
__hostdev__ const NodeT * getFirstNode() const
return a const pointer to the first node of the specified type
Definition: NanoVDB.h:2505
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:2818
__hostdev__ uint32_t getPatch() const
Definition: NanoVDB.h:703
Definition: NanoVDB.h:2915
__hostdev__ DenseIter(RootT *parent)
Definition: NanoVDB.h:2956
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this internal ...
Definition: NanoVDB.h:3467
__hostdev__ void setOn(uint32_t n)
Set the specified bit on.
Definition: NanoVDB.h:1222
__hostdev__ const uint64_t & firstOffset() const
Definition: NanoVDB.h:4051
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4258
__hostdev__ Vec3T applyInverseJacobianF(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmet...
Definition: NanoVDB.h:1497
__hostdev__ bool isCompatible() const
Definition: NanoVDB.h:704
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:4955
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:5124
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3819
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:5131
__hostdev__ const ValueType & background() const
Return a const reference to the background value.
Definition: NanoVDB.h:2451
const typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:1695
__hostdev__ int age() const
Returns the difference between major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER.
Definition: NanoVDB.h:708
__hostdev__ bool isRootNext() const
return true if RootData is layout out immediately after TreeData in memory
Definition: NanoVDB.h:2371
__hostdev__ NodeT * getFirstNode()
return a pointer to the first node of the specified type
Definition: NanoVDB.h:2495
__hostdev__ const NodeTrait< RootT, 2 >::type * getFirstUpper() const
Definition: NanoVDB.h:2535
CheckMode
List of different modes for computing for a checksum.
Definition: NanoVDB.h:1764
__hostdev__ void setAvg(const FloatType &v)
Definition: NanoVDB.h:3671
__hostdev__ uint8_t bitWidth() const
Definition: NanoVDB.h:3879
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this root node and any of...
Definition: NanoVDB.h:3016
__hostdev__ ValueIter operator++(int)
Definition: NanoVDB.h:2900
bool isValid() const
Definition: NanoVDB.h:5822
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3161
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4320
uint16_t gridCount
Definition: NanoVDB.h:5820
__hostdev__ void extrema(ValueType &min, ValueType &max) const
Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree...
Definition: NanoVDB.h:2555
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:1076
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3730
__hostdev__ T & getValue(const math::Coord &ijk, T *channelPtr) const
Return the value from a specified channel that maps to the specified coordinate.
Definition: NanoVDB.h:5757
typename Node2::ChildNodeType Node1
Definition: NanoVDB.h:2415
Dummy type for a 16 bit floating point values (placeholder for IEEE 754 Half)
Definition: NanoVDB.h:187
static __hostdev__ uint64_t memUsage()
return memory usage in bytes for the class
Definition: NanoVDB.h:2429
RootT RootNodeType
Definition: NanoVDB.h:2404
__hostdev__ Vec3T applyInverseMapF(const Vec3T &xyz) const
Definition: NanoVDB.h:1991
__hostdev__ uint64_t first(uint32_t i) const
Definition: NanoVDB.h:4176
__hostdev__ bool isMaskOn(uint32_t offset) const
Definition: NanoVDB.h:4127
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:5518
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:2234
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:5060
__hostdev__ NodeT * child() const
Definition: NanoVDB.h:2713
uint32_t countOn(uint64_t v)
Definition: Util.h:622
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, uint32_t channelID=0u)
Ctor from an IndexGrid and an integer ID of an internal channel that is assumed to exist as blind dat...
Definition: NanoVDB.h:5688
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of point in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:5566
void ArrayType
Definition: NanoVDB.h:4036
__hostdev__ ChannelT & operator()(const math::Coord &ijk) const
Definition: NanoVDB.h:5742
__hostdev__ uint32_t countOn(uint32_t i) const
Return the number of lower set bits in mask up to but excluding the i&#39;th bit.
Definition: NanoVDB.h:1052
__hostdev__ ChildIter()
Definition: NanoVDB.h:2853
const std::enable_if<!VecTraits< T >::IsVec, T >::type & min(const T &a, const T &b)
Definition: Composite.h:106
__hostdev__ bool hasStats() const
Definition: NanoVDB.h:4049
Definition: NanoVDB.h:1549
__hostdev__ uint64_t memUsage() const
Return the actual memory footprint of this root node.
Definition: NanoVDB.h:3028
int64_t child
Definition: NanoVDB.h:2638
__hostdev__ void fill(const ValueType &v)
Definition: NanoVDB.h:3680
__hostdev__ bool getAvg() const
Definition: NanoVDB.h:4007
BuildT ArrayType
Definition: NanoVDB.h:3623
uint32_t mBlindMetadataCount
Definition: NanoVDB.h:1911
Type Pow2(Type x)
Return x2.
Definition: Math.h:548
__hostdev__ OffIterator beginOff() const
Definition: NanoVDB.h:1127
__hostdev__ DenseIterator beginDense()
Definition: NanoVDB.h:2980
BuildT BuildType
Definition: NanoVDB.h:4904
Version version
Definition: NanoVDB.h:5819
__hostdev__ bool getMin() const
Definition: NanoVDB.h:3952
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:4363
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:4849
__hostdev__ bool getMax() const
Definition: NanoVDB.h:4006
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:2926
bool BuildType
Definition: NanoVDB.h:3936
math::Extrema extrema(const IterT &iter, bool threaded=true)
Iterate over a scalar grid and compute extrema (min/max) of the values of the voxels that are visited...
Definition: Statistics.h:354
__hostdev__ CoordT origin() const
Definition: NanoVDB.h:2636
__hostdev__ bool operator<(const Version &rhs) const
Definition: NanoVDB.h:696
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5132
__hostdev__ const Vec3dBBox & worldBBox() const
return AABB of active values in world space
Definition: NanoVDB.h:2066
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3961
__hostdev__ uint8_t flags() const
Definition: NanoVDB.h:4384
__hostdev__ const ValueT & getMax() const
Definition: NanoVDB.h:3212
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2106
__hostdev__ bool isEmpty() const
Return true if this RootNode is empty, i.e. contains no values or nodes.
Definition: NanoVDB.h:3031
VecT< GridHandleT > readUncompressedGrids(const char *fileName, const typename GridHandleT::BufferType &buffer=typename GridHandleT::BufferType())
Read a multiple un-compressed NanoVDB grids from a file and return them as a vector.
Definition: NanoVDB.h:5987
uint64_t Type
Definition: NanoVDB.h:535
CoordT mBBoxMin
Definition: NanoVDB.h:4039
__hostdev__ uint32_t checksum(int i) const
Definition: NanoVDB.h:1825
__hostdev__ bool operator!=(const Mask &other) const
Definition: NanoVDB.h:1195
CoordT CoordType
Definition: NanoVDB.h:5036
Dummy type for a variable bit quantization of floating point values.
Definition: NanoVDB.h:199
__hostdev__ Vec3T indexToWorldF(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:2197
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:5508
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:5517
__hostdev__ const MaskType< LOG2DIM > & getChildMask() const
Definition: NanoVDB.h:3449
StatsT mAverage
Definition: NanoVDB.h:2605
Visits all values in a leaf node, i.e. both active and inactive values.
Definition: NanoVDB.h:4302
__hostdev__ void setMin(const ValueT &v)
Definition: NanoVDB.h:3223
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:2241
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3337
__hostdev__ bool isActive() const
Definition: NanoVDB.h:2635
Visits active tile values of this node only.
Definition: NanoVDB.h:3349
__hostdev__ const NodeTrait< RootT, LEVEL >::type * getFirstNode() const
return a const pointer to the first node of the specified level
Definition: NanoVDB.h:2524
#define NANOVDB_HOSTDEV_DISABLE_WARNING
Definition: Util.h:94
__hostdev__ void setValueOnly(uint32_t offset, const ValueType &value)
Definition: NanoVDB.h:3649
Visits all inactive values in a leaf node.
Definition: NanoVDB.h:4269
__hostdev__ const TreeType & tree() const
Return a const reference to the tree of the IndexGrid.
Definition: NanoVDB.h:5715
Like ValueIndex but with a mutable mask.
Definition: NanoVDB.h:178
typename RootT::ValueType ValueType
Definition: NanoVDB.h:2408
static __hostdev__ uint64_t memUsage(uint32_t tableSize)
Return the expected memory footprint in bytes with the specified number of tiles. ...
Definition: NanoVDB.h:3025
Definition: NanoVDB.h:1749
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:4349
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3998
GridMetaData(const NanoGrid< T > &grid)
Definition: NanoVDB.h:5468
float FloatType
Definition: NanoVDB.h:752
__hostdev__ CheckMode mode() const
return the mode of the 64 bit checksum
Definition: NanoVDB.h:1848
__hostdev__ bool isMask() const
Definition: NanoVDB.h:2236
__hostdev__ Vec3T applyJacobianF(const Vec3T &xyz) const
Definition: NanoVDB.h:1993
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:4958
__hostdev__ void setMax(const bool &)
Definition: NanoVDB.h:3963
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:2105
OutGridT const XformOp bool bool
Definition: ValueTransformer.h:609
typename ChildT::BuildType BuildT
Definition: NanoVDB.h:2570
typename BuildT::ValueType ValueType
Definition: NanoVDB.h:2109
__hostdev__ uint32_t nodeCount() const
Return number of nodes at LEVEL.
Definition: NanoVDB.h:2031
float ValueType
Definition: NanoVDB.h:3701
__hostdev__ Mask & operator&=(const Mask &other)
Bitwise intersection.
Definition: NanoVDB.h:1299
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5137
__hostdev__ const TreeT & tree() const
Return a const reference to the tree.
Definition: NanoVDB.h:2154
__hostdev__ bool safeCast() const
return true if the RootData follows right after the TreeData. If so, this implies that it&#39;s safe to c...
Definition: NanoVDB.h:5491
uint32_t findLowestOn(uint32_t v)
Returns the index of the lowest, i.e. least significant, on bit in the specified 32 bit word...
Definition: Util.h:502
__hostdev__ ChannelT & getValue(const math::Coord &ijk) const
Return the value from a cached channel that maps to the specified coordinate.
Definition: NanoVDB.h:5741
uint64_t mData1
Definition: NanoVDB.h:1913
BitFlags(Type mask)
Definition: NanoVDB.h:928
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:3411
bool streq(const char *lhs, const char *rhs)
Test if two null-terminated byte strings are the same.
Definition: Util.h:268
__hostdev__ Vec3T worldToIndexDirF(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:2207
__hostdev__ BaseIter(DataT *data)
Definition: NanoVDB.h:2834
__hostdev__ Iterator()
Definition: NanoVDB.h:1064
typename ChildT::template MaskType< LOG2DIM > MaskT
Definition: NanoVDB.h:3133
__hostdev__ uint64_t getDev() const
Definition: NanoVDB.h:4105
BitFlags< 32 > mFlags
Definition: NanoVDB.h:1900
__hostdev__ Vec3T applyInverseJacobianF(const Vec3T &xyz) const
Definition: NanoVDB.h:1995
__hostdev__ ValueOnIter operator++(int)
Definition: NanoVDB.h:2933
uint8_t mFlags
Definition: NanoVDB.h:4157
__hostdev__ void setAvg(const bool &)
Definition: NanoVDB.h:3964
__hostdev__ void setMin(const ValueType &v)
Definition: NanoVDB.h:3669
__hostdev__ bool getMin() const
Definition: NanoVDB.h:4005
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:2232
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:3307
void writeUncompressedGrid(StreamT &os, const GridData *gridData, bool raw=false)
This is a standalone alternative to io::writeGrid(...,Codec::NONE) defined in util/IO.h Unlike the latter this function has no dependencies at all, not even NanoVDB.h, so it also works if client code only includes PNanoVDB.h!
Definition: NanoVDB.h:5883
__hostdev__ bool isEmpty() const
test if the grid is empty, e.i the root table has size 0
Definition: NanoVDB.h:2080
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of point in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:5633
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:3290
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3707
#define __device__
Definition: Util.h:79
__hostdev__ Vec3T applyInverseMap(const Vec3T &xyz) const
Definition: NanoVDB.h:1980
int64_t mBlindMetadataOffset
Definition: NanoVDB.h:1910
float mTaperF
Definition: NanoVDB.h:1381
Implements Tree::probeLeaf(math::Coord)
Definition: NanoVDB.h:1757
__hostdev__ ChildIter(RootT *parent)
Definition: NanoVDB.h:2854
__hostdev__ void setValue(uint32_t offset, const ValueType &value)
Definition: NanoVDB.h:3650
__hostdev__ CoordBBox bbox() const
Return the index bounding box of all the active values in this tree, i.e. in all nodes of the tree...
Definition: NanoVDB.h:2368
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4042
typename RootT::CoordType CoordType
Definition: NanoVDB.h:4807
__hostdev__ MagicType toMagic(uint64_t magic)
maps 64 bits of magic number to enum
Definition: NanoVDB.h:367
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:3435
GridClass gridClass
Definition: NanoVDB.h:5846
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:2230
Codec codec
Definition: NanoVDB.h:5821
__hostdev__ ChildT * probeChild(const CoordT &ijk)
Definition: NanoVDB.h:2758
RootType RootNodeType
Definition: NanoVDB.h:2104
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:5299
static __hostdev__ uint64_t memUsage()
Return memory usage in bytes for this class only.
Definition: NanoVDB.h:2063
uint64_t gridSize
Definition: NanoVDB.h:5844
__hostdev__ void setValueOnly(const CoordT &ijk, const ValueType &v)
Definition: NanoVDB.h:4458
__hostdev__ const NodeT * getNode() const
Return a const point to the cached node of the specified type.
Definition: NanoVDB.h:5283
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3844
__hostdev__ bool isFloatingPoint(GridType gridType)
return true if the GridType maps to a floating point type
Definition: NanoVDB.h:558
CoordT mBBoxMin
Definition: NanoVDB.h:3941
__hostdev__ void setOff()
Set all bits off.
Definition: NanoVDB.h:1279
__hostdev__ void localToGlobalCoord(Coord &ijk) const
modifies local coordinates to global coordinates of a tile or child node
Definition: NanoVDB.h:3521
__hostdev__ void setAvg(const StatsT &v)
Definition: NanoVDB.h:2790
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3332
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:3491
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:3038
__hostdev__ Vec3T indexToWorldGrad(const Vec3T &grad) const
transform the gradient from index space to world space.
Definition: NanoVDB.h:2189
__hostdev__ uint64_t activeVoxelCount() const
Computes a AABB of active values in world space.
Definition: NanoVDB.h:2224
__hostdev__ const LeafNode * probeLeaf(const CoordT &) const
Definition: NanoVDB.h:4482
__hostdev__ uint64_t * words()
Return a pointer to the list of words of the bit mask.
Definition: NanoVDB.h:1152
__hostdev__ void init(float min, float max, uint8_t bitWidth)
Definition: NanoVDB.h:3724
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:4848
__hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
Constructor from major.minor.patch version numbers.
Definition: NanoVDB.h:688
__hostdev__ Mask & operator|=(const Mask &other)
Bitwise union.
Definition: NanoVDB.h:1307
static __hostdev__ uint32_t voxelCount()
Return the total number of voxels (e.g. values) encoded in this leaf node.
Definition: NanoVDB.h:4425
typename ChildT::BuildType BuildT
Definition: NanoVDB.h:3130
__hostdev__ DataType * data()
Definition: NanoVDB.h:4361
__hostdev__ Mask & operator-=(const Mask &other)
Bitwise difference.
Definition: NanoVDB.h:1315
__hostdev__ Checksum(uint64_t checksum, CheckMode mode=CheckMode::Full)
Definition: NanoVDB.h:1814
static __hostdev__ bool safeCast(const NanoGrid< T > &grid)
return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:5502
__hostdev__ ValueIterator beginValue()
Definition: NanoVDB.h:2911
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:5272
uint64_t mPointCount
Definition: NanoVDB.h:4161
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:4846
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4048
__hostdev__ CoordType origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:3452
__hostdev__ DenseIterator(uint32_t pos=Mask::SIZE)
Definition: NanoVDB.h:1098
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4291
MaskT< LOG2DIM > mMask
Definition: NanoVDB.h:4136
PointAccessor(const NanoGrid< BuildT > &grid)
Definition: NanoVDB.h:5549
__hostdev__ Vec3T applyIJT(const Vec3T &xyz) const
Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arit...
Definition: NanoVDB.h:1506
__hostdev__ void setValueOnly(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:4179
__hostdev__ uint64_t getIndex(const math::Coord &ijk) const
Return the linear offset into a channel that maps to the specified coordinate.
Definition: NanoVDB.h:5737
ValueT mMinimum
Definition: NanoVDB.h:3152
__hostdev__ bool setGridName(const char *src)
Definition: NanoVDB.h:1970
__hostdev__ void setMask(uint32_t offset, bool v)
Definition: NanoVDB.h:4128
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3944
__hostdev__ void localToGlobalCoord(Coord &ijk) const
Converts (in place) a local index coordinate to a global index coordinate.
Definition: NanoVDB.h:4400
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:5249
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4198
uint64_t ValueType
Definition: NanoVDB.h:4149
Dummy type for a 8bit quantization of float point values.
Definition: NanoVDB.h:193
__hostdev__ DataType * data()
Definition: NanoVDB.h:2424
typename NanoLeaf< BuildT >::ValueType ValueType
Definition: NanoVDB.h:6211
MagicType
Enums used to identify magic numbers recognized by NanoVDB.
Definition: NanoVDB.h:358
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5382
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:2963
Dummy type for a voxel whose value equals its binary active state.
Definition: NanoVDB.h:184
uint8_t mFlags
Definition: NanoVDB.h:4041
uint64_t mPrefixSum
Definition: NanoVDB.h:4043
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:2841
__hostdev__ Vec3T applyJacobian(const Vec3T &xyz) const
Definition: NanoVDB.h:1982
__hostdev__ util::enable_if< util::is_same< T, Point >::value, const uint64_t & >::type pointCount() const
Return the total number of points indexed by this PointGrid.
Definition: NanoVDB.h:2151
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:5077
typename util::match_const< ChildT, DataT >::type NodeT
Definition: NanoVDB.h:2662
__hostdev__ ChildIter(ParentT *parent)
Definition: NanoVDB.h:3279
uint32_t mGridIndex
Definition: NanoVDB.h:1901
__hostdev__ ValueOnIterator(const LeafNode *parent)
Definition: NanoVDB.h:4247
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:5130
uint64_t mVoxelCount
Definition: NanoVDB.h:2347
static __hostdev__ uint32_t CoordToOffset(const CoordType &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:3505
Vec3d voxelSize
Definition: NanoVDB.h:5849
__hostdev__ uint32_t nodeCount() const
Definition: NanoVDB.h:2474
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:5129
uint64_t type
Definition: NanoVDB.h:466
GridBlindDataSemantic mSemantic
Definition: NanoVDB.h:1555
__hostdev__ Vec3T applyMap(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 64bit floating point arithmetics.
Definition: NanoVDB.h:1431
CoordT CoordType
Definition: NanoVDB.h:5244
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3376
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3646
__hostdev__ const CoordBBox & indexBBox() const
return AABB of active values in index space
Definition: NanoVDB.h:2069
__hostdev__ bool isFloatingPointVector(GridType gridType)
return true if the GridType maps to a floating point vec3.
Definition: NanoVDB.h:572
ValueT mBackground
Definition: NanoVDB.h:2602
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:1724
__hostdev__ ValueOnIterator()
Definition: NanoVDB.h:3355
__hostdev__ bool isInteger(GridType gridType)
Return true if the GridType maps to a POD integer type.
Definition: NanoVDB.h:584
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:2435
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is l...
Definition: NanoVDB.h:5643
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findPrev(uint32_t start) const
Definition: NanoVDB.h:1357
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:3036
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:2892
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:4937
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:1710
Codec codec
Definition: NanoVDB.h:5853
Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation...
Definition: NanoVDB.h:1376
uint64_t FloatType
Definition: NanoVDB.h:4035
float Type
Definition: NanoVDB.h:507
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache Noop since this template specializa...
Definition: NanoVDB.h:4831
__hostdev__ const Tile * probeTile(const CoordT &ijk) const
Definition: NanoVDB.h:2753
#define NANOVDB_ASSERT(x)
Definition: Util.h:50
char mName[MaxNameSize]
Definition: NanoVDB.h:1558
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:1717
GridType
List of types that are currently supported by NanoVDB.
Definition: NanoVDB.h:220
Vec3dBBox worldBBox
Definition: NanoVDB.h:5847
uint32_t mValueSize
Definition: NanoVDB.h:1554
typename BuildT::CoordType CoordType
Definition: NanoVDB.h:2111
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:4266
GridBlindMetaData(int64_t dataOffset, uint64_t valueCount, uint32_t valueSize, GridBlindDataSemantic semantic, GridBlindDataClass dataClass, GridType dataType)
Definition: NanoVDB.h:1573
__hostdev__ DenseIterator & operator++()
Definition: NanoVDB.h:1106
__hostdev__ void setOn()
Set all bits on.
Definition: NanoVDB.h:1273
__hostdev__ FloatType average() const
Return a const reference to the average of all the active values encoded in this leaf node...
Definition: NanoVDB.h:4376
Class to access points at a specific voxel location.
Definition: NanoVDB.h:5542
__hostdev__ Mask & operator^=(const Mask &other)
Bitwise XOR.
Definition: NanoVDB.h:1323
static __hostdev__ bool safeCast(const GridData *gridData)
return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:5495
ValueT mMaximum
Definition: NanoVDB.h:2604
static __hostdev__ uint64_t alignmentPadding(const void *p)
return the smallest number of bytes that when added to the specified pointer results in a 32 byte ali...
Definition: NanoVDB.h:545
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:4911
__hostdev__ Iterator operator++(int)
Definition: NanoVDB.h:1083
ValueT mMinimum
Definition: NanoVDB.h:2603
__hostdev__ ChildIter operator++(int)
Definition: NanoVDB.h:2866
bool FloatType
Definition: NanoVDB.h:794
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Return the total number of active tiles at the specified level of the tree.
Definition: NanoVDB.h:2467
C++11 implementation of std::is_floating_point.
Definition: Util.h:329
__hostdev__ FloatType getDev() const
Definition: NanoVDB.h:4190
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:3327
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:4833
static void * memzero(void *dst, size_t byteCount)
Zero initialization of memory.
Definition: Util.h:297
uint64_t mValueCount
Definition: NanoVDB.h:1553
__hostdev__ DataType * data()
Definition: NanoVDB.h:3433
const typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:1739
__hostdev__ const BlindDataT * getBlindData() const
Get a const pointer to the blind data represented by this meta data.
Definition: NanoVDB.h:1634
__hostdev__ void setAvg(const StatsT &v)
Definition: NanoVDB.h:3225
__hostdev__ ValueT getValue(uint32_t n) const
Definition: NanoVDB.h:3194
static DstT * PtrAdd(void *p, int64_t offset)
Adds a byte offset to a non-const pointer to produce another non-const pointer.
Definition: Util.h:478
__hostdev__ void setValue(const CoordT &ijk, const ValueType &v)
Sets the value at the specified location and activate its state.
Definition: NanoVDB.h:4452
__hostdev__ ValueOnIter & operator++()
Definition: NanoVDB.h:2927
__hostdev__ float getAvg() const
return the quantized average of the active values in this node
Definition: NanoVDB.h:3739
Class that encapsulates two CRC32 checksums, one for the Grid, Tree and Root node meta data and one f...
Definition: NanoVDB.h:1790
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5343
__hostdev__ uint64_t activeVoxelCount() const
Return a const reference to the index bounding box of all the active values in this tree...
Definition: NanoVDB.h:2460
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:4844
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:5505
__hostdev__ float getMax() const
return the quantized maximum of the active values in this node
Definition: NanoVDB.h:3736
typename ChildT::ValueType ValueT
Definition: NanoVDB.h:3129
__hostdev__ bool getDev() const
Definition: NanoVDB.h:3955
Implements Tree::getDim(math::Coord)
Definition: NanoVDB.h:1753
Definition: NanoVDB.h:2827
__hostdev__ const ValueType & background() const
Return the total number of active voxels in the root and all its child nodes.
Definition: NanoVDB.h:3003
__hostdev__ DenseIterator beginDense() const
Definition: NanoVDB.h:3424
Codec
Define compression codecs.
Definition: NanoVDB.h:5789
__hostdev__ Vec3T applyIJT(const Vec3T &xyz) const
Definition: NanoVDB.h:1986
__hostdev__ uint32_t countOn() const
Return the total number of set bits in this Mask.
Definition: NanoVDB.h:1043
uint8_t mFlags
Definition: NanoVDB.h:3706
__hostdev__ bool isChild(uint32_t n) const
Definition: NanoVDB.h:3206
Internal nodes of a VDB tree.
Definition: NanoVDB.h:3240
__hostdev__ ValueOnIterator()
Definition: NanoVDB.h:4242
__hostdev__ ValueType getMax() const
Definition: NanoVDB.h:4188
__hostdev__ ConstDenseIterator cbeginChildAll() const
Definition: NanoVDB.h:2982
static __hostdev__ T * alignPtr(T *p)
offset the specified pointer so it is 32 byte aligned. Works with both const and non-const pointers...
Definition: NanoVDB.h:553
__hostdev__ bool isOn() const
Return true if all the bits are set in this Mask.
Definition: NanoVDB.h:1204
__hostdev__ ConstTileIterator probe(const CoordT &ijk) const
Definition: NanoVDB.h:2739
ValueT ValueType
Definition: NanoVDB.h:5243
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:4843
BuildT TreeType
Definition: NanoVDB.h:2102
Base-class for quantized float leaf nodes.
Definition: NanoVDB.h:3697
uint64_t FloatType
Definition: NanoVDB.h:788
math::BBox< CoordT > mBBox
Definition: NanoVDB.h:3147
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:3039
__hostdev__ void setMin(const ValueT &v)
Definition: NanoVDB.h:2788
__hostdev__ Vec3T worldToIndexF(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:2193
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3948
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:1709
__hostdev__ uint64_t getMin() const
Definition: NanoVDB.h:4102
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3416
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:4944
__hostdev__ Vec3T worldToIndex(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:2170
Definition: NanoVDB.h:2342
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4172
C++11 implementation of std::is_same.
Definition: Util.h:314
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:5261
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3713
__hostdev__ void setValue(uint32_t offset, bool v)
Definition: NanoVDB.h:3956
__hostdev__ bool isActive() const
Return true if this node or any of its child nodes contain active values.
Definition: NanoVDB.h:3535
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel (regardless of state or location in the tree.) ...
Definition: NanoVDB.h:2438
__hostdev__ uint64_t getMin() const
Definition: NanoVDB.h:4082
Struct with all the member data of the InternalNode (useful during serialization of an openvdb Intern...
Definition: NanoVDB.h:3127
static __hostdev__ constexpr int64_t memUsage()
Definition: NanoVDB.h:3812
__hostdev__ const NanoGrid< Point > & grid() const
Definition: NanoVDB.h:5629
TileT * mPos
Definition: NanoVDB.h:2663
static __hostdev__ constexpr uint64_t memUsage()
Definition: NanoVDB.h:3843
const typename GridT::TreeType Type
Definition: NanoVDB.h:2386
Dummy type for a 4bit quantization of float point values.
Definition: NanoVDB.h:190
__hostdev__ bool operator!=(const Checksum &rhs) const
return true if the checksums are not identical
Definition: NanoVDB.h:1860
uint32_t Type
Definition: NanoVDB.h:6112
__hostdev__ uint64_t gridSize() const
Return memory usage in bytes for this class only.
Definition: NanoVDB.h:2131
__hostdev__ Version version() const
Definition: NanoVDB.h:5535
typename ChildT::CoordType CoordT
Definition: NanoVDB.h:3132
uint64_t mCRC64
Definition: NanoVDB.h:1796
__hostdev__ uint64_t & full()
Definition: NanoVDB.h:1828
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4011
Return point to the lower internal node where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6138
uint64_t type
Definition: NanoVDB.h:487
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3950
const typename GridT::TreeType type
Definition: NanoVDB.h:2387
__hostdev__ NodeTrait< RootT, 1 >::type * getFirstLower()
Definition: NanoVDB.h:2532
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:4957
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this internal node and any of its child nodes...
Definition: NanoVDB.h:3464
__hostdev__ void setBlindData(const void *blindData)
Definition: NanoVDB.h:1611
__hostdev__ const ValueT & getMin() const
Definition: NanoVDB.h:3211
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
get iterators over attributes to points at a specific voxel location
Definition: NanoVDB.h:5654
uint8_t mFlags
Definition: NanoVDB.h:3943
T type
Definition: Util.h:387
__hostdev__ uint64_t getAvg() const
Definition: NanoVDB.h:4104
__hostdev__ void setBBoxOn(bool on=true)
Definition: NanoVDB.h:1966
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:2237
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:2611
__hostdev__ uint32_t head() const
Definition: NanoVDB.h:1829
T type
Definition: NanoVDB.h:459
__hostdev__ ValueIterator & operator++()
Definition: NanoVDB.h:4336
__hostdev__ bool setName(const char *name)
Sets the name string.
Definition: NanoVDB.h:1619
__hostdev__ uint32_t blindDataCount() const
Return true if this grid is empty, i.e. contains no values or nodes.
Definition: NanoVDB.h:2271
__hostdev__ void setChild(uint32_t n, const void *ptr)
Definition: NanoVDB.h:3169
__hostdev__ Vec3T applyInverseJacobian(const Vec3T &xyz) const
Definition: NanoVDB.h:1984
__hostdev__ bool operator==(const Version &rhs) const
Definition: NanoVDB.h:695
Struct with all the member data of the Grid (useful during serialization of an openvdb grid) ...
Definition: NanoVDB.h:1894
typename ChildT::template MaskType< LOG2 > MaskType
Definition: NanoVDB.h:3252
auto callNanoGrid(GridDataT *gridData, ArgsT &&...args)
Below is an example of the struct used for generic programming with callNanoGrid. ...
Definition: NanoVDB.h:4719
Implements Tree::isActive(math::Coord)
Definition: NanoVDB.h:1751
__hostdev__ Vec3T applyInverseMapF(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 32bit floating point arithmetics.
Definition: NanoVDB.h:1476
Definition: NanoVDB.h:723
__hostdev__ bool probeValue(const CoordT &ijk, ValueType &v) const
Return true if the voxel value at the given coordinate is active and updates v with the value...
Definition: NanoVDB.h:4475
__hostdev__ NodeTrait< RootT, 2 >::type * getFirstUpper()
Definition: NanoVDB.h:2534
__hostdev__ void toggle()
brief Toggle the state of all bits in the mask
Definition: NanoVDB.h:1291
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:2233
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4193
uint32_t mData0
Definition: NanoVDB.h:1912
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4253
Dummy type for indexing points into voxels.
Definition: NanoVDB.h:202
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:3445
__hostdev__ const void * blindData() const
returns a const void point to the blind data
Definition: NanoVDB.h:1623
__hostdev__ ValueType getValue(const CoordT &ijk) const
Return the voxel value at the given coordinate.
Definition: NanoVDB.h:4442
static __hostdev__ size_t memUsage()
Return memory usage in bytes for the class.
Definition: NanoVDB.h:3441
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:3285
typename ChildT::FloatType StatsT
Definition: NanoVDB.h:3131
Definition: NanoVDB.h:897
__hostdev__ bool isActive(const CoordType &ijk) const
Return the active state of the given voxel (regardless of state or location in the tree...
Definition: NanoVDB.h:2442
__hostdev__ const ChildT * getChild(uint32_t n) const
Definition: NanoVDB.h:3188
uint32_t findHighestOn(uint32_t v)
Returns the index of the highest, i.e. most significant, on bit in the specified 32 bit word...
Definition: Util.h:572
__hostdev__ uint64_t activeVoxelCount() const
Definition: NanoVDB.h:5529
bool Type
Definition: NanoVDB.h:493
__hostdev__ const ChildT * probeChild(const CoordT &ijk) const
Definition: NanoVDB.h:2764
Definition: NanoVDB.h:1095
__hostdev__ ValueType getFirstValue() const
If the first entry in this node&#39;s table is a tile, return the tile&#39;s value. Otherwise, return the result of calling getFirstValue() on the child.
Definition: NanoVDB.h:3474
StatsT mAverage
Definition: NanoVDB.h:3154
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3851
Definition: NanoVDB.h:2948
__hostdev__ const Map & map() const
Definition: NanoVDB.h:5524
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:3346
CoordT mBBoxMin
Definition: NanoVDB.h:3704
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:2858
typename ChildT::CoordType CoordT
Definition: NanoVDB.h:2571
__hostdev__ uint64_t getMax() const
Definition: NanoVDB.h:4083
__hostdev__ ValueType getValue(uint32_t offset) const
Return the voxel value at the given offset.
Definition: NanoVDB.h:4439
__hostdev__ ValueIter & operator++()
Definition: NanoVDB.h:2894
MaskT mValueMask
Definition: NanoVDB.h:3149
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findNext(uint32_t start) const
Definition: NanoVDB.h:1343
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:2840
__hostdev__ uint32_t totalNodeCount() const
Definition: NanoVDB.h:2486
uint16_t mMin
Definition: NanoVDB.h:3711
typename ChildT::FloatType StatsT
Definition: NanoVDB.h:2572
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:1716
__hostdev__ Vec3d voxelSize() const
Definition: NanoVDB.h:5527
typename FloatTraits< ValueType >::FloatType FloatType
Definition: NanoVDB.h:4151
__hostdev__ const ValueT & getMin() const
Definition: NanoVDB.h:2783
Like ValueOnIndex but with a mutable mask.
Definition: NanoVDB.h:181
GridMetaData(const GridData *gridData)
Definition: NanoVDB.h:5475
const typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:1694
__hostdev__ DataType * data()
Definition: NanoVDB.h:2123
MaskT< LOG2DIM > mValues
Definition: NanoVDB.h:3945
This is a convenient class that allows for access to grid meta-data that are independent of the value...
Definition: NanoVDB.h:5459
__hostdev__ TileIterator beginTile()
Definition: NanoVDB.h:2728
__hostdev__ int findBlindData(const char *name) const
Return the index of the first blind data with specified name if found, otherwise -1.
Definition: NanoVDB.h:2322
__hostdev__ uint32_t gridCount() const
Definition: NanoVDB.h:5522
uint32_t mTableSize
Definition: NanoVDB.h:2600
typename BuildT::BuildType BuildType
Definition: NanoVDB.h:2110
typename T::ValueType ElementType
Definition: NanoVDB.h:743
__hostdev__ bool isMask() const
Definition: NanoVDB.h:5512
__hostdev__ uint64_t memUsage() const
return memory usage in bytes for the leaf node
Definition: NanoVDB.h:4430
__hostdev__ bool isSequential() const
return true if the specified node type is laid out breadth-first in memory and has a fixed size...
Definition: NanoVDB.h:2248
Definition: NanoVDB.h:4217
typename RootT::CoordType CoordType
Definition: NanoVDB.h:2410
float type
Definition: NanoVDB.h:529
defines a tree type from a grid type while preserving constness
Definition: NanoVDB.h:2378
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:5133
__hostdev__ GridType mapToGridType()
Definition: NanoVDB.h:867
__hostdev__ uint32_t nodeCount(int level) const
Definition: NanoVDB.h:2480
__hostdev__ ChannelT & operator()(int i, int j, int k) const
Definition: NanoVDB.h:5743
__hostdev__ AccessorType getAccessor() const
Return a new instance of a ReadAccessor used to access values in this grid.
Definition: NanoVDB.h:2160
Visits child nodes of this node only.
Definition: NanoVDB.h:3266
__hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:3527
typename remove_const< T >::type type
Definition: Util.h:431
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3999
__hostdev__ void setValue(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:4180
__hostdev__ Checksum()
default constructor initiates checksum to EMPTY
Definition: NanoVDB.h:1804
uint64_t Type
Definition: NanoVDB.h:486
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3813
__hostdev__ ValueIterator(const InternalNode *parent)
Definition: NanoVDB.h:3321
typename Mask< 3 >::template Iterator< ON > MaskIterT
Definition: NanoVDB.h:4233
GridType mDataType
Definition: NanoVDB.h:1557
Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
Definition: NanoVDB.h:4214
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:4959
__hostdev__ DataType * data()
Definition: NanoVDB.h:2992
__hostdev__ const uint64_t & valueCount() const
Return total number of values indexed by the IndexGrid.
Definition: NanoVDB.h:5721
__hostdev__ NodeTrait< RootT, LEVEL >::type * getFirstNode()
return a pointer to the first node at the specified level
Definition: NanoVDB.h:2515
typename util::match_const< Tile, DataT >::type TileT
Definition: NanoVDB.h:2661
__hostdev__ bool isValue() const
Definition: NanoVDB.h:2634
__hostdev__ Vec3T worldToIndexDir(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:2184
__hostdev__ DenseIterator cbeginChildAll() const
Definition: NanoVDB.h:3425
BuildT BuildType
Definition: NanoVDB.h:5034
__hostdev__ uint32_t rootTableSize() const
return the root table has size
Definition: NanoVDB.h:2072
bool FloatType
Definition: NanoVDB.h:3937
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:4472
double mTaperD
Definition: NanoVDB.h:1385
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3421
uint32_t dim
Definition: NanoVDB.h:6215
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this root node and any of its child n...
Definition: NanoVDB.h:3013
MaskT mChildMask
Definition: NanoVDB.h:3150
__hostdev__ bool isActive(uint32_t n) const
Definition: NanoVDB.h:4462
__hostdev__ Version()
Default constructor.
Definition: NanoVDB.h:679
__hostdev__ void setMinMaxOn(bool on=true)
Definition: NanoVDB.h:1965
static __hostdev__ uint32_t valueCount()
Definition: NanoVDB.h:4078
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3629
__hostdev__ const Tile * tile(uint32_t n) const
Returns a pointer to the tile at the specified linear offset.
Definition: NanoVDB.h:2646
__hostdev__ const StatsT & average() const
Definition: NanoVDB.h:3213
__hostdev__ ValueType getFirstValue() const
Return the first value in this leaf node.
Definition: NanoVDB.h:4445
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:3380
typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:1730
typename NanoLeaf< BuildT >::ValueType ValueT
Definition: NanoVDB.h:6084
__hostdev__ ValueOnIterator(const InternalNode *parent)
Definition: NanoVDB.h:3360
__hostdev__ ConstValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:2945
Definition: NanoVDB.h:2616
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:4919
typename BuildToValueMap< BuildT >::Type ValueT
Definition: NanoVDB.h:6178
FloatType mAverage
Definition: NanoVDB.h:3633
__hostdev__ TileIter(DataT *data, uint32_t pos=0)
Definition: NanoVDB.h:2667
BuildT BuildType
Definition: NanoVDB.h:4805
ValueT ValueType
Definition: NanoVDB.h:5035
__hostdev__ const ChildNodeType * probeChild(const CoordType &ijk) const
Definition: NanoVDB.h:3498
float Type
Definition: NanoVDB.h:500
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2812
StatsT mStdDevi
Definition: NanoVDB.h:2606
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4058
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2125
__hostdev__ uint32_t & head()
Definition: NanoVDB.h:1830
ValueT value
Definition: NanoVDB.h:3138
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:4168
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:5514
CoordBBox indexBBox
Definition: NanoVDB.h:5848
const std::enable_if<!VecTraits< T >::IsVec, T >::type & max(const T &a, const T &b)
Definition: Composite.h:110
__hostdev__ uint32_t rootTableSize() const
Definition: NanoVDB.h:5533
__hostdev__ TileIter & operator++()
Definition: NanoVDB.h:2678
__hostdev__ bool isCached1(const CoordType &ijk) const
Definition: NanoVDB.h:5110
__hostdev__ bool isActive(uint32_t n) const
Definition: NanoVDB.h:3200
__hostdev__ ValueOnIterator beginValueOn()
Definition: NanoVDB.h:2944
__hostdev__ const ChildT * getChild(const Tile *tile) const
Definition: NanoVDB.h:2777
__hostdev__ bool isEmpty() const
return true if the 64 bit checksum is disables (unset)
Definition: NanoVDB.h:1843
__hostdev__ Iterator(uint32_t pos, const Mask *parent)
Definition: NanoVDB.h:1069
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:5041
__hostdev__ ValueType getMax() const
Definition: NanoVDB.h:3658
typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:1731
__hostdev__ void * nodePtr()
Return a non-const void pointer to the first node at LEVEL.
Definition: NanoVDB.h:2020
__hostdev__ float getMin() const
return the quantized minimum of the active values in this node
Definition: NanoVDB.h:3733
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:4961
__hostdev__ ValueOffIterator()
Definition: NanoVDB.h:4275
ChildT ChildNodeType
Definition: NanoVDB.h:3248
typename DataType::BuildT BuildType
Definition: NanoVDB.h:3246
__hostdev__ ValueOffIterator cbeginValueOff() const
Definition: NanoVDB.h:4299
__hostdev__ GridClass toGridClass(GridClass defaultClass=GridClass::Unknown)
Maps from a templated build type to a GridClass enum.
Definition: NanoVDB.h:873
typename DataType::ValueType ValueType
Definition: NanoVDB.h:4225
float type
Definition: NanoVDB.h:522
__hostdev__ uint32_t getMajor() const
Definition: NanoVDB.h:701
__hostdev__ Vec3T indexToWorldGradF(const Vec3T &grad) const
Transforms the gradient from index space to world space.
Definition: NanoVDB.h:2212
__hostdev__ const NodeTrait< TreeT, LEVEL >::type * getNode() const
Definition: NanoVDB.h:5291
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:2239
uint64_t type
Definition: NanoVDB.h:536
__hostdev__ FloatType getAvg() const
Definition: NanoVDB.h:4189
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:3247
__hostdev__ void setValue(const CoordType &k, bool s, const ValueType &v)
Definition: NanoVDB.h:2626
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:5340
__hostdev__ const uint64_t * words() const
Definition: NanoVDB.h:1153
__hostdev__ const GridBlindMetaData & blindMetaData(uint32_t n) const
Definition: NanoVDB.h:2305
static __hostdev__ uint32_t bitCount()
Return the number of bits available in this Mask.
Definition: NanoVDB.h:1037
__hostdev__ void setDev(const bool &)
Definition: NanoVDB.h:3965
uint64_t Type
Definition: NanoVDB.h:479
__hostdev__ Vec3T indexToWorldDir(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:2179
__hostdev__ const void * getRoot() const
Get a const void pointer to the root node (never NULL)
Definition: NanoVDB.h:2359
__hostdev__ const StatsT & stdDeviation() const
Definition: NanoVDB.h:3214
__hostdev__ bool isActive() const
Return true if any of the voxel value are active in this leaf node.
Definition: NanoVDB.h:4465
GridBlindDataClass
Blind-data Classes that are currently supported by NanoVDB.
Definition: NanoVDB.h:411
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4158
__hostdev__ const void * treePtr() const
Definition: NanoVDB.h:2003
static __hostdev__ size_t memUsage()
Return the memory footprint in bytes of this Mask.
Definition: NanoVDB.h:1034
const typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:1738
Visits all active values in a leaf node.
Definition: NanoVDB.h:4236
__hostdev__ const LeafNodeType * getFirstLeaf() const
Definition: NanoVDB.h:2531
__hostdev__ Vec3T indexToWorldDirF(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:2202