OpenVDB  13.0.0
NanoVDB.h
1 // Copyright Contributors to the OpenVDB Project
2 // SPDX-License-Identifier: Apache-2.0
3 
4 /*!
5  \file nanovdb/NanoVDB.h
6 
7  \author Ken Museth
8 
9  \date January 8, 2020
10 
11  \brief Implements a light-weight self-contained VDB data-structure in a
12  single file! In other words, this is a significantly watered-down
13  version of the OpenVDB implementation, with few dependencies - so
14  a one-stop-shop for a minimalistic VDB data structure that runs on
15  most platforms!
16 
17  \note It is important to note that NanoVDB (by design) is a read-only
18  sparse GPU (and CPU) friendly data structure intended for applications
19  like rendering and collision detection. As such it obviously lacks
20  a lot of the functionality and features of OpenVDB grids. NanoVDB
21  is essentially a compact linearized (or serialized) representation of
22  an OpenVDB tree with getValue methods only. For best performance use
23  the ReadAccessor::getValue method as opposed to the Tree::getValue
24  method. Note that since a ReadAccessor caches previous access patterns
25  it is by design not thread-safe, so use one instantiation per thread
26  (it is very light-weight). Also, it is not safe to copy accessors between
27  the GPU and CPU! In fact, client code should only interface
28  with the API of the Grid class (all other nodes of the NanoVDB data
29  structure can safely be ignored by most client code)!
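
 As a minimal illustrative sketch (not part of the original documentation), assuming a valid
 NanoGrid<float> has already been built elsewhere (e.g. with createNanoGrid), a typical
 per-thread read pattern looks like:

 \code
 float sampleOrigin(const nanovdb::NanoGrid<float>& grid)
 {
     auto acc = grid.getAccessor(); // light-weight ReadAccessor; create one instance per thread
     return acc.getValue(nanovdb::Coord(0, 0, 0)); // cached, accelerated lookup
     // grid.tree().getValue(nanovdb::Coord(0, 0, 0)) also works, but is slower (no caching)
 }
 \endcode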
30 
31 
32  \warning NanoVDB grids can only be constructed via tools like createNanoGrid
33  or the GridBuilder. This explains why none of the grid nodes defined below
34  have public constructors or destructors.
35 
36  \details Please see the following paper for more details on the data structure:
37  K. Museth, “VDB: High-Resolution Sparse Volumes with Dynamic Topology”,
38  ACM Transactions on Graphics 32(3), 2013, which can be found here:
39  http://www.museth.org/Ken/Publications_files/Museth_TOG13.pdf
40 
41  NanoVDB was first published here: https://dl.acm.org/doi/fullHtml/10.1145/3450623.3464653
42 
43 
44  Overview: This file implements the following fundamental classes that, when combined,
45  form the backbone of the VDB tree data structure:
46 
47  Coord- a signed integer coordinate
48  Vec3 - a 3D vector
49  Vec4 - a 4D vector
50  BBox - a bounding box
51  Mask - a bitmask essential to the non-root tree nodes
52  Map - an affine coordinate transformation
53  Grid - contains a Tree and a map for world<->index transformations. Use
54  this class as the main API with client code!
55  Tree - contains a RootNode and getValue methods that should only be used for debugging
56  RootNode - the top-level node of the VDB data structure
57  InternalNode - the internal nodes of the VDB data structure
58  LeafNode - the lowest level tree nodes that encode voxel values and state
59  ReadAccessor - implements accelerated random access operations
60 
61  Semantics: A VDB data structure encodes values and (binary) states associated with
62  signed integer coordinates. Values encoded at the leaf node level are
63  denoted voxel values, and values associated with other tree nodes are referred
64  to as tile values, which by design cover a larger coordinate index domain.
65 
66 
67  Memory layout:
68 
69  It's important to emphasize that all the grid data (defined below) are explicitly 32 byte
70  aligned, which implies that any memory buffer that contains a NanoVDB grid must also be
71  32 byte aligned. That is, the memory address of the beginning of a buffer (see ascii diagram below)
72  must be divisible by 32, i.e. uintptr_t(&buffer)%32 == 0! If this is not the case, the C++ standard
73  says the behaviour is undefined! Normally this is not a concern on GPUs, because they use 256 byte
74  aligned allocations, but the same cannot be said about the CPU.
75 
76  GridData is always at the very beginning of the buffer immediately followed by TreeData!
77  The remaining nodes and blind-data are allowed to be scattered throughout the buffer,
78  though in practice they are arranged as:
79 
80  GridData: 672 bytes (e.g. magic, checksum, major, flags, index, count, size, name, map, world bbox, voxel size, class, type, offset, count)
81 
82  TreeData: 64 bytes (node counts and byte offsets)
83 
84  ... optional padding ...
85 
86  RootData: size depends on ValueType (index bbox, voxel count, tile count, min/max/avg/standard deviation)
87 
88  Array of: RootData::Tile
89 
90  ... optional padding ...
91 
92  Array of: Upper InternalNodes of size 32^3: bbox, two bit masks, 32768 tile values, and min/max/avg/standard deviation values
93 
94  ... optional padding ...
95 
96  Array of: Lower InternalNodes of size 16^3: bbox, two bit masks, 4096 tile values, and min/max/avg/standard deviation values
97 
98  ... optional padding ...
99 
100  Array of: LeafNodes of size 8^3: bbox, bit masks, 512 voxel values, and min/max/avg/standard deviation values
101 
102  ... optional padding ...
103 
104  Array of: GridBlindMetaData (288 bytes). The offset and count are defined in GridData::mBlindMetadataOffset and GridData::mBlindMetadataCount
105 
106  ... optional padding ...
107 
108  Array of: blind data
109 
110  Notation: "]---[" implies it has optional padding, and "][" implies zero padding
111 
112  [GridData(672B)][TreeData(64B)]---[RootData][N x Root::Tile]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
113  where the labeled pointers below mark the start of the corresponding sections above:
114 
115  +-- Start of 32B aligned buffer : GridType::DataType* gridData
116  +-- RootData : RootType::DataType* rootData
117  +-- N x Root::Tile : RootType::DataType::Tile* tile
118  +-- InternalData<5> : Node2::DataType* upperData
119  +-- InternalData<4> : Node1::DataType* lowerData
120  +-- LeafData<3> : Node0::DataType* leafData
121  +-- BLINDMETA : GridBlindMetaData*
122 
123 */
124 
125 #ifndef NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
126 #define NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
127 
128 // The following two header files are the only mandatory dependencies
129 #include <nanovdb/util/Util.h>// for __hostdev__ and lots of other utility functions
130 #include <nanovdb/math/Math.h>// for Coord, BBox, Vec3, Vec4 etc
131 
132 // Do not change this value! 32 byte alignment is fixed in NanoVDB
133 #define NANOVDB_DATA_ALIGNMENT 32
134 
135 // NANOVDB_MAGIC_NUMB previously used for both grids and files (starting with v32.6.0)
136 // NANOVDB_MAGIC_GRID currently used exclusively for grids (serialized to a single buffer)
137 // NANOVDB_MAGIC_FILE currently used exclusively for files
138 // note: the hex byte 0x30 is the ASCII code of the trailing character '0' in "NanoVDB0" (and likewise 0x31 for '1', 0x32 for '2')
139 #define NANOVDB_MAGIC_NUMB 0x304244566f6e614eUL // "NanoVDB0" in hex - little endian (uint64_t)
140 #define NANOVDB_MAGIC_GRID 0x314244566f6e614eUL // "NanoVDB1" in hex - little endian (uint64_t)
141 #define NANOVDB_MAGIC_FILE 0x324244566f6e614eUL // "NanoVDB2" in hex - little endian (uint64_t)
142 #define NANOVDB_MAGIC_MASK 0x00FFFFFFFFFFFFFFUL // use this mask to remove the number
143 
144 #define NANOVDB_USE_NEW_MAGIC_NUMBERS// enables use of the new magic numbers described above
145 
146 #define NANOVDB_MAJOR_VERSION_NUMBER 32 // reflects changes to the ABI and hence also the file format
147 #define NANOVDB_MINOR_VERSION_NUMBER 9 // reflects changes to the API but not ABI
148 #define NANOVDB_PATCH_VERSION_NUMBER 0 // reflects changes that does not affect the ABI or API
149 
150 #define TBB_SUPPRESS_DEPRECATED_MESSAGES 1
151 
152 // This replaces a Coord key at the root level with a single uint64_t
153 #define NANOVDB_USE_SINGLE_ROOT_KEY
154 
155 // This replaces three levels of Coord keys in the ReadAccessor with one Coord
156 //#define NANOVDB_USE_SINGLE_ACCESSOR_KEY
157 
158 // Use this to switch between std::ofstream or FILE implementations
159 //#define NANOVDB_USE_IOSTREAMS
160 
161 #define NANOVDB_FPN_BRANCHLESS
162 
163 #if !defined(NANOVDB_ALIGN)
164 #define NANOVDB_ALIGN(n) alignas(n)
165 #endif // !defined(NANOVDB_ALIGN)
166 
167 namespace nanovdb {// =================================================================
168 
169 // --------------------------> Build types <------------------------------------
170 
171 /// @brief Dummy type for a voxel whose value equals an offset into an external value array
172 class ValueIndex{};
173 
174 /// @brief Dummy type for a voxel whose value equals an offset into an external value array of active values
175 class ValueOnIndex{};
176 
177 /// @brief Dummy type for a voxel whose value equals its binary active state
178 class ValueMask{};
179 
180 /// @brief Dummy type for a 16 bit floating point value (placeholder for IEEE 754 Half)
181 class Half{};
182 
183 /// @brief Dummy type for a 4bit quantization of floating point values
184 class Fp4{};
185 
186 /// @brief Dummy type for an 8bit quantization of floating point values
187 class Fp8{};
188 
189 /// @brief Dummy type for a 16bit quantization of floating point values
190 class Fp16{};
191 
192 /// @brief Dummy type for a variable bit quantization of floating point values
193 class FpN{};
194 
195 /// @brief Dummy type for indexing points into voxels
196 class Point{};
197 
198 // --------------------------> GridType <------------------------------------
199 
200 /// @brief return the number of characters (including null termination) required to convert enum type to a string
201 ///
202 /// @note This curious implementation, which subtracts End from StrLen, avoids duplicate values in the enum!
203 template <class EnumT>
204 __hostdev__ inline constexpr uint32_t strlen(){return (uint32_t)EnumT::StrLen - (uint32_t)EnumT::End;}
205 
206 /// @brief List of types that are currently supported by NanoVDB
207 ///
208 /// @note To expand on this list do:
209 /// 1) Add the new type between Unknown and End in the enum below
210 /// 2) Add the new type to OpenToNanoVDB::processGrid that maps OpenVDB types to GridType
211 /// 3) Verify that the ConvertTrait in NanoToOpenVDB.h works correctly with the new type
212 /// 4) Add the new type to toGridType (defined below) that maps NanoVDB types to GridType
213 /// 5) Add the new type to toStr (defined below)
214 enum class GridType : uint32_t { Unknown = 0, // unknown value type - should rarely be used
215  Float = 1, // single precision floating point value
216  Double = 2, // double precision floating point value
217  Int16 = 3, // half precision signed integer value
218  Int32 = 4, // single precision signed integer value
219  Int64 = 5, // double precision signed integer value
220  Vec3f = 6, // single precision floating 3D vector
221  Vec3d = 7, // double precision floating 3D vector
222  Mask = 8, // no value, just the active state
223  Half = 9, // half precision floating point value (placeholder for IEEE 754 Half)
224  UInt32 = 10, // single precision unsigned integer value
225  Boolean = 11, // boolean value, encoded in bit array
226  RGBA8 = 12, // RGBA packed into 32bit word in reverse-order, i.e. R is lowest byte.
227  Fp4 = 13, // 4bit quantization of floating point value
228  Fp8 = 14, // 8bit quantization of floating point value
229  Fp16 = 15, // 16bit quantization of floating point value
230  FpN = 16, // variable bit quantization of floating point value
231  Vec4f = 17, // single precision floating 4D vector
232  Vec4d = 18, // double precision floating 4D vector
233  Index = 19, // index into an external array of active and inactive values
234  OnIndex = 20, // index into an external array of active values
235  //IndexMask = 21, // retired ValueIndexMask - available for future use
236  //OnIndexMask = 22, // retired ValueOnIndexMask - available for future use
237  PointIndex = 23, // voxels encode indices to co-located points
238  Vec3u8 = 24, // 8bit quantization of floating point 3D vector (only as blind data)
239  Vec3u16 = 25, // 16bit quantization of floating point 3D vector (only as blind data)
240  UInt8 = 26, // 8 bit unsigned integer values (eg 0 -> 255 gray scale)
241  End = 27,// total number of types in this enum (excluding StrLen since it's not a type)
242  StrLen = End + 11};// this entry is used to determine the minimum size of c-string
243 
244 /// @brief Maps a GridType to a c-string
245 /// @param dst destination string of size 12 or larger
246 /// @param gridType GridType enum to be mapped to a string
247 /// @return Returns a c-string used to describe a GridType
248 __hostdev__ inline char* toStr(char *dst, GridType gridType)
249 {
250  switch (gridType){
251  case GridType::Unknown: return util::strcpy(dst, "Unknown");
252  case GridType::Float: return util::strcpy(dst, "float");
253  case GridType::Double: return util::strcpy(dst, "double");
254  case GridType::Int16: return util::strcpy(dst, "int16");
255  case GridType::Int32: return util::strcpy(dst, "int32");
256  case GridType::Int64: return util::strcpy(dst, "int64");
257  case GridType::Vec3f: return util::strcpy(dst, "Vec3f");
258  case GridType::Vec3d: return util::strcpy(dst, "Vec3d");
259  case GridType::Mask: return util::strcpy(dst, "Mask");
260  case GridType::Half: return util::strcpy(dst, "Half");
261  case GridType::UInt32: return util::strcpy(dst, "uint32");
262  case GridType::Boolean: return util::strcpy(dst, "bool");
263  case GridType::RGBA8: return util::strcpy(dst, "RGBA8");
264  case GridType::Fp4: return util::strcpy(dst, "Float4");
265  case GridType::Fp8: return util::strcpy(dst, "Float8");
266  case GridType::Fp16: return util::strcpy(dst, "Float16");
267  case GridType::FpN: return util::strcpy(dst, "FloatN");
268  case GridType::Vec4f: return util::strcpy(dst, "Vec4f");
269  case GridType::Vec4d: return util::strcpy(dst, "Vec4d");
270  case GridType::Index: return util::strcpy(dst, "Index");
271  case GridType::OnIndex: return util::strcpy(dst, "OnIndex");
272  case GridType::PointIndex: return util::strcpy(dst, "PointIndex");// StrLen = 10 + 1 + End
273  case GridType::Vec3u8: return util::strcpy(dst, "Vec3u8");
274  case GridType::Vec3u16: return util::strcpy(dst, "Vec3u16");
275  case GridType::UInt8: return util::strcpy(dst, "uint8");
276  default: return util::strcpy(dst, "End");
277  }
278 }
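// Example (illustrative, not part of the original header): strlen<GridType>() yields the minimal
// buffer size, including the null terminator, required by the toStr() overload above:
//
//     char str[nanovdb::strlen<nanovdb::GridType>()];
//     nanovdb::toStr(str, nanovdb::GridType::Vec3f); // str now holds "Vec3f"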
279 
280 // --------------------------> GridClass <------------------------------------
281 
282 /// @brief Classes (superset of OpenVDB) that are currently supported by NanoVDB
283 enum class GridClass : uint32_t { Unknown = 0,
284  LevelSet = 1, // narrow band level set, e.g. SDF
285  FogVolume = 2, // fog volume, e.g. density
286  Staggered = 3, // staggered MAC grid, e.g. velocity
287  PointIndex = 4, // point index grid
288  PointData = 5, // point data grid
289  Topology = 6, // grid with active states only (no values)
290  VoxelVolume = 7, // volume of geometric cubes, e.g. colors cubes in Minecraft
291  IndexGrid = 8, // grid whose values are offsets, e.g. into an external array
292  TensorGrid = 9, // Index grid for indexing learnable tensor features
293  End = 10,// total number of types in this enum (excluding StrLen since it's not a type)
294  StrLen = End + 7};// this entry is used to determine the minimum size of c-string
295 
296 
297 /// @brief Returns a c-string used to describe a GridClass
298 /// @param dst destination string of size 7 or larger
299 /// @param gridClass GridClass enum to be converted to a string
300 __hostdev__ inline char* toStr(char *dst, GridClass gridClass)
301 {
302  switch (gridClass){
303  case GridClass::Unknown: return util::strcpy(dst, "?");
304  case GridClass::LevelSet: return util::strcpy(dst, "SDF");
305  case GridClass::FogVolume: return util::strcpy(dst, "FOG");
306  case GridClass::Staggered: return util::strcpy(dst, "MAC");
307  case GridClass::PointIndex: return util::strcpy(dst, "PNTIDX");// StrLen = 6 + 1 + End
308  case GridClass::PointData: return util::strcpy(dst, "PNTDAT");
309  case GridClass::Topology: return util::strcpy(dst, "TOPO");
310  case GridClass::VoxelVolume: return util::strcpy(dst, "VOX");
311  case GridClass::IndexGrid: return util::strcpy(dst, "INDEX");
312  case GridClass::TensorGrid: return util::strcpy(dst, "TENSOR");
313  default: return util::strcpy(dst, "END");
314  }
315 }
316 
317 // --------------------------> GridFlags <------------------------------------
318 
319 /// @brief Grid flags which indicate what extra information is present in the grid buffer.
320 enum class GridFlags : uint32_t {
321  HasLongGridName = 1 << 0, // grid name is longer than 256 characters
322  HasBBox = 1 << 1, // nodes contain bounding-boxes of active values
323  HasMinMax = 1 << 2, // nodes contain min/max of active values
324  HasAverage = 1 << 3, // nodes contain averages of active values
325  HasStdDeviation = 1 << 4, // nodes contain standard deviations of active values
326  IsBreadthFirst = 1 << 5, // nodes are typically arranged breadth-first in memory
327  End = 1 << 6, // use End - 1 as a mask for the 6 lower bit flags
328  StrLen = End + 23,// this entry is used to determine the minimum size of c-string
329 };
330 
331 /// @brief Returns a c-string used to describe a GridFlags
332 /// @param dst destination string of size 23 or larger
333 /// @param gridFlags GridFlags enum to be converted to a string
334 __hostdev__ inline const char* toStr(char *dst, GridFlags gridFlags)
335 {
336  switch (gridFlags){
337  case GridFlags::HasLongGridName: return util::strcpy(dst, "has long grid name");
338  case GridFlags::HasBBox: return util::strcpy(dst, "has bbox");
339  case GridFlags::HasMinMax: return util::strcpy(dst, "has min/max");
340  case GridFlags::HasAverage: return util::strcpy(dst, "has average");
341  case GridFlags::HasStdDeviation: return util::strcpy(dst, "has standard deviation");// StrLen = 22 + 1 + End
342  case GridFlags::IsBreadthFirst: return util::strcpy(dst, "is breadth-first");
343  default: return util::strcpy(dst, "end");
344  }
345 }
346 
347 // --------------------------> MagicType <------------------------------------
348 
349 /// @brief Enums used to identify magic numbers recognized by NanoVDB
350 enum class MagicType : uint32_t { Unknown = 0,// first 64 bits are neither of the cases below
351  OpenVDB = 1,// first 32 bits = 0x56444220UL
352  NanoVDB = 2,// first 64 bits = NANOVDB_MAGIC_NUMB
353  NanoGrid = 3,// first 64 bits = NANOVDB_MAGIC_GRID
354  NanoFile = 4,// first 64 bits = NANOVDB_MAGIC_FILE
355  End = 5,
356  StrLen = End + 14};// this entry is used to determine the minimum size of c-string
357 
358 /// @brief maps 64 bits of magic number to enum
359 __hostdev__ inline MagicType toMagic(uint64_t magic)
360 {
361  switch (magic){
362  case NANOVDB_MAGIC_NUMB: return MagicType::NanoVDB;
363  case NANOVDB_MAGIC_GRID: return MagicType::NanoGrid;
364  case NANOVDB_MAGIC_FILE: return MagicType::NanoFile;
365  default: return (magic & ~uint32_t(0)) == 0x56444220UL ? MagicType::OpenVDB : MagicType::Unknown;
366  }
367 }
368 
369 /// @brief print 64-bit magic number to string
370 /// @param dst destination string of size 25 or larger
371 /// @param magic 64 bit magic number to be printed
372 /// @return return destination string @c dst
373 __hostdev__ inline char* toStr(char *dst, MagicType magic)
374 {
375  switch (magic){
376  case MagicType::Unknown: return util::strcpy(dst, "unknown");
377  case MagicType::NanoVDB: return util::strcpy(dst, "nanovdb");
378  case MagicType::NanoGrid: return util::strcpy(dst, "nanovdb::Grid");// StrLen = 13 + 1 + End
379  case MagicType::NanoFile: return util::strcpy(dst, "nanovdb::File");
380  case MagicType::OpenVDB: return util::strcpy(dst, "openvdb");
381  default: return util::strcpy(dst, "end");
382  }
383 }
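// Example (illustrative sketch): classify the first 8 bytes of a buffer that is assumed to hold
// either a serialized NanoVDB grid/file or an OpenVDB file ("data" is a hypothetical pointer):
//
//     uint64_t magic = 0;
//     std::memcpy(&magic, data, sizeof(uint64_t)); // requires <cstring>
//     if (toMagic(magic) == MagicType::NanoGrid) { /* data points to a GridData header */ }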
384 
385 // --------------------------> PointType enums <------------------------------------
386 
387 // Define the type used when the points are encoded as blind data in the output grid
388 enum class PointType : uint32_t { Disable = 0,// no point information e.g. when BuildT != Point
389  PointID = 1,// linear index of type uint32_t to points
390  World64 = 2,// Vec3d in world space
391  World32 = 3,// Vec3f in world space
392  Grid64 = 4,// Vec3d in grid space
393  Grid32 = 5,// Vec3f in grid space
394  Voxel32 = 6,// Vec3f in voxel space
395  Voxel16 = 7,// Vec3u16 in voxel space
396  Voxel8 = 8,// Vec3u8 in voxel space
397  Default = 9,// output matches input, i.e. Vec3d or Vec3f in world space
398  End =10 };
399 
400 // --------------------------> GridBlindData enums <------------------------------------
401 
402 /// @brief Blind-data Classes that are currently supported by NanoVDB
403 enum class GridBlindDataClass : uint32_t { Unknown = 0,
404  IndexArray = 1,// indices typically used for mapping into other arrays
405  AttributeArray = 2,// attributes typically associated with points
406  GridName = 3,// grid names of length longer than 256 characters
407  ChannelArray = 4,// channel of values typically used by index grids
408  End = 5 };
409 
410 /// @brief Blind-data Semantics that are currently understood by NanoVDB
411 enum class GridBlindDataSemantic : uint32_t { Unknown = 0,
412  PointPosition = 1, // 3D coordinates in an unknown space
413  PointColor = 2, // color associated with point
414  PointNormal = 3,// normal associated with point
415  PointRadius = 4,// radius of point
416  PointVelocity = 5,// velocity associated with point
417  PointId = 6,// integer ID of point
418  WorldCoords = 7, // 3D coordinates in world space, e.g. (0.056, 0.8, 1.8)
419  GridCoords = 8, // 3D coordinates in grid space, e.g. (1.2, 4.0, 5.7), aka index-space
420  VoxelCoords = 9, // 3D coordinates in voxel space, e.g. (0.2, 0.0, 0.7)
421  LevelSet = 10, // narrow band level set, e.g. SDF
422  FogVolume = 11, // fog volume, e.g. density
423  Staggered = 12, // staggered MAC grid, e.g. velocity
424  End = 13 };
425 
426 /// @brief Maps from GridBlindDataSemantic to GridClass
427 /// @note Useful when converting an IndexGrid with blind data of type T into a Grid<T>
428 /// @param semantics GridBlindDataSemantic
429 /// @param defaultClass Default return type used for no match
430 /// @return GridClass
431 __hostdev__ inline GridClass toGridClass(GridBlindDataSemantic semantics,
432  GridClass defaultClass = GridClass::Unknown)
433 {
434  switch (semantics){
435  case GridBlindDataSemantic::PointPosition:
436  return GridClass::PointData;
437  case GridBlindDataSemantic::PointColor:
438  return GridClass::PointData;
439  case GridBlindDataSemantic::PointNormal:
440  return GridClass::PointData;
441  case GridBlindDataSemantic::PointRadius:
442  return GridClass::PointData;
443  case GridBlindDataSemantic::PointVelocity:
444  return GridClass::PointData;
445  case GridBlindDataSemantic::PointId:
446  return GridClass::PointIndex;
447  case GridBlindDataSemantic::LevelSet:
448  return GridClass::LevelSet;
449  case GridBlindDataSemantic::FogVolume:
450  return GridClass::FogVolume;
451  case GridBlindDataSemantic::Staggered:
452  return GridClass::Staggered;
453  default:
454  return defaultClass;
455  }
456 }
457 
458 /// @brief Maps from GridClass to GridBlindDataSemantic.
459 /// @note Useful when converting a Grid<T> into an IndexGrid with blind data of type T.
460 /// @param gridClass GridClass
461 /// @param defaultSemantic Default return type used for no match
462 /// @return GridBlindDataSemantic
463 __hostdev__ inline GridBlindDataSemantic toGridBlindDataSemantic(GridClass gridClass,
464  GridBlindDataSemantic defaultSemantic = GridBlindDataSemantic::Unknown)
465 {
466  switch (gridClass){
467  case GridClass::PointData:
468  return GridBlindDataSemantic::PointPosition;
469  case GridClass::LevelSet:
470  return GridBlindDataSemantic::LevelSet;
471  case GridClass::FogVolume:
472  return GridBlindDataSemantic::FogVolume;
473  case GridClass::Staggered:
474  return GridBlindDataSemantic::Staggered;
475  default:
476  return defaultSemantic;
477  }
478 }
479 
480 // --------------------------> BuildTraits <------------------------------------
481 
482 /// @brief Define static boolean tests for template build types
483 template<typename T>
484 struct BuildTraits
485 {
486  // check if T is an index type
487  static constexpr bool is_index = util::is_same<T, ValueIndex, ValueOnIndex>::value;
488  static constexpr bool is_onindex = util::is_same<T, ValueOnIndex>::value;
489  static constexpr bool is_offindex = util::is_same<T, ValueIndex>::value;
490  // check if T is a compressed float type with fixed bit precision
491  static constexpr bool is_FpX = util::is_same<T, Fp4, Fp8, Fp16>::value;
492  // check if T is a compressed float type with fixed or variable bit precision
493  static constexpr bool is_Fp = util::is_same<T, Fp4, Fp8, Fp16, FpN>::value;
494  // check if T is a POD float type, i.e. float or double
495  static constexpr bool is_float = util::is_floating_point<T>::value;
496  // check if T is a template specialization of LeafData<T>, i.e. has T mValues[512]
497  static constexpr bool is_special = is_index || is_Fp || util::is_same<T, Point, bool, ValueMask>::value;
498 }; // BuildTraits
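// Example (illustrative): the traits are constexpr and can therefore be checked at compile time
//
//     static_assert(nanovdb::BuildTraits<nanovdb::Fp8>::is_Fp, "Fp8 is a compressed float type");
//     static_assert(nanovdb::BuildTraits<nanovdb::ValueOnIndex>::is_onindex, "ValueOnIndex indexes active values");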
499 
500 // --------------------------> BuildToValueMap <------------------------------------
501 
502 /// @brief Maps one type (e.g. the build types above) to another (actual) type
503 template<typename T>
504 struct BuildToValueMap
505 {
506  using Type = T;
507  using type = T;
508 };
509 
510 template<>
511 struct BuildToValueMap<ValueIndex>
512 {
513  using Type = uint64_t;
514  using type = uint64_t;
515 };
516 
517 template<>
518 struct BuildToValueMap<ValueOnIndex>
519 {
520  using Type = uint64_t;
521  using type = uint64_t;
522 };
523 
524 template<>
525 struct BuildToValueMap<ValueMask>
526 {
527  using Type = bool;
528  using type = bool;
529 };
530 
531 template<>
532 struct BuildToValueMap<Half>
533 {
534  using Type = float;
535  using type = float;
536 };
537 
538 template<>
539 struct BuildToValueMap<Fp4>
540 {
541  using Type = float;
542  using type = float;
543 };
544 
545 template<>
546 struct BuildToValueMap<Fp8>
547 {
548  using Type = float;
549  using type = float;
550 };
551 
552 template<>
553 struct BuildToValueMap<Fp16>
554 {
555  using Type = float;
556  using type = float;
557 };
558 
559 template<>
560 struct BuildToValueMap<FpN>
561 {
562  using Type = float;
563  using type = float;
564 };
565 
566 template<>
567 struct BuildToValueMap<Point>
568 {
569  using Type = uint64_t;
570  using type = uint64_t;
571 };
572 
573 template<typename T>
575 
576 // --------------------------> utility functions related to alignment <------------------------------------
577 
578 /// @brief return true if the specified pointer is 32 byte aligned
579 __hostdev__ inline static bool isAligned(const void* p){return uint64_t(p) % NANOVDB_DATA_ALIGNMENT == 0;}
580 
581 /// @brief return the smallest number of bytes that when added to the specified pointer results in a 32 byte aligned pointer.
582 __hostdev__ inline static uint64_t alignmentPadding(const void* p)
583 {
584  NANOVDB_ASSERT(p);
585  return (NANOVDB_DATA_ALIGNMENT - (uint64_t(p) % NANOVDB_DATA_ALIGNMENT)) % NANOVDB_DATA_ALIGNMENT;
586 }
587 
588 /// @brief offset the specified pointer so it is 32 byte aligned. Works with both const and non-const pointers.
589 template <typename T>
590 __hostdev__ inline static T* alignPtr(T* p){return util::PtrAdd<T>(p, alignmentPadding(p));}
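// Example (illustrative sketch): a raw CPU allocation can be over-allocated and shifted to satisfy
// the 32 byte alignment required by NanoVDB grids ("gridSize" is a hypothetical byte count):
//
//     uint8_t* raw = new uint8_t[gridSize + NANOVDB_DATA_ALIGNMENT];
//     uint8_t* buffer = nanovdb::alignPtr(raw); // now uintptr_t(buffer) % 32 == 0
//     NANOVDB_ASSERT(nanovdb::isAligned(buffer));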
591 
592 // --------------------------> isFloatingPoint(GridType) <------------------------------------
593 
594 /// @brief return true if the GridType maps to a floating point type
595 __hostdev__ inline bool isFloatingPoint(GridType gridType)
596 {
597  return gridType == GridType::Float ||
598  gridType == GridType::Double ||
599  gridType == GridType::Half ||
600  gridType == GridType::Fp4 ||
601  gridType == GridType::Fp8 ||
602  gridType == GridType::Fp16 ||
603  gridType == GridType::FpN;
604 }
605 
606 // --------------------------> isFloatingPointVector(GridType) <------------------------------------
607 
608 /// @brief return true if the GridType maps to a floating point vec3 or vec4.
609 __hostdev__ inline bool isFloatingPointVector(GridType gridType)
610 {
611  return gridType == GridType::Vec3f ||
612  gridType == GridType::Vec3d ||
613  gridType == GridType::Vec4f ||
614  gridType == GridType::Vec4d;
615 }
616 
617 // --------------------------> isInteger(GridType) <------------------------------------
618 
619 /// @brief Return true if the GridType maps to a POD integer type.
620 /// @details These types are used to associate a voxel with a POD integer type
621 __hostdev__ inline bool isInteger(GridType gridType)
622 {
623  return gridType == GridType::Int16 ||
624  gridType == GridType::Int32 ||
625  gridType == GridType::Int64 ||
626  gridType == GridType::UInt32||
627  gridType == GridType::UInt8;
628 }
629 
630 // --------------------------> isIndex(GridType) <------------------------------------
631 
632 /// @brief Return true if the GridType maps to a special index type (not a POD integer type).
633 /// @details These types are used to index from a voxel into an external array of values, e.g. sidecar or blind data.
634 __hostdev__ inline bool isIndex(GridType gridType)
635 {
636  return gridType == GridType::Index ||// index both active and inactive values
637  gridType == GridType::OnIndex;// index active values only
638 }
639 
640 // --------------------------> isValid(GridType, GridClass) <------------------------------------
641 
642 /// @brief return true if the combination of GridType and GridClass is valid.
643 __hostdev__ inline bool isValid(GridType gridType, GridClass gridClass)
644 {
645  if (gridClass == GridClass::LevelSet || gridClass == GridClass::FogVolume) {
646  return isFloatingPoint(gridType);
647  } else if (gridClass == GridClass::Staggered) {
648  return isFloatingPointVector(gridType);
649  } else if (gridClass == GridClass::PointIndex || gridClass == GridClass::PointData) {
650  return gridType == GridType::PointIndex || gridType == GridType::UInt32;
651  } else if (gridClass == GridClass::Topology) {
652  return gridType == GridType::Mask;
653  } else if (gridClass == GridClass::IndexGrid) {
654  return isIndex(gridType);
655  } else if (gridClass == GridClass::VoxelVolume) {
656  return gridType == GridType::RGBA8 || gridType == GridType::Float ||
657  gridType == GridType::Double || gridType == GridType::Vec3f ||
658  gridType == GridType::Vec3d || gridType == GridType::UInt32 ||
659  gridType == GridType::UInt8;
660  }
661  return gridClass < GridClass::End && gridType < GridType::End; // any valid combination
662 }
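// Example (illustrative): a level set is required to store floating point values, so
//
//     isValid(GridType::Float, GridClass::LevelSet); // true
//     isValid(GridType::Int32, GridClass::LevelSet); // false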
663 
664 // --------------------------> validation of blind data meta data <------------------------------------
665 
666 /// @brief return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid.
667 __hostdev__ inline bool isValid(const GridBlindDataClass& blindClass,
668  const GridBlindDataSemantic& blindSemantics,
669  const GridType& blindType)
670 {
671  bool test = false;
672  switch (blindClass) {
673  case GridBlindDataClass::IndexArray:
674  test = (blindSemantics == GridBlindDataSemantic::Unknown ||
675  blindSemantics == GridBlindDataSemantic::PointId) &&
676  isInteger(blindType);
677  break;
678  case GridBlindDataClass::AttributeArray:
679  if (blindSemantics == GridBlindDataSemantic::PointPosition ||
680  blindSemantics == GridBlindDataSemantic::WorldCoords) {
681  test = blindType == GridType::Vec3f || blindType == GridType::Vec3d;
682  } else if (blindSemantics == GridBlindDataSemantic::GridCoords) {
683  test = blindType == GridType::Vec3f;
684  } else if (blindSemantics == GridBlindDataSemantic::VoxelCoords) {
685  test = blindType == GridType::Vec3f || blindType == GridType::Vec3u8 || blindType == GridType::Vec3u16;
686  } else {
687  test = blindSemantics != GridBlindDataSemantic::PointId;
688  }
689  break;
690  case GridBlindDataClass::GridName:
691  test = blindSemantics == GridBlindDataSemantic::Unknown && blindType == GridType::Unknown;
692  break;
693  default: // captures blindClass == Unknown and ChannelArray
694  test = blindClass < GridBlindDataClass::End &&
695  blindSemantics < GridBlindDataSemantic::End &&
696  blindType < GridType::End; // any valid combination
697  break;
698  }
699  //if (!test) printf("Invalid combination: GridBlindDataClass=%u, GridBlindDataSemantic=%u, GridType=%u\n",(uint32_t)blindClass, (uint32_t)blindSemantics, (uint32_t)blindType);
700  return test;
701 }
702 
703 // ----------------------------> Version class <-------------------------------------
704 
705 /// @brief Bit-compacted representation of all three version numbers
706 ///
707 /// @details major is the top 11 bits, minor is the 11 middle bits and patch is the lower 10 bits
708 class Version
709 {
710  uint32_t mData; // 11 + 11 + 10 bit packing of major + minor + patch
711 public:
712  static constexpr uint32_t End = 0, StrLen = 8;// for strlen<Version>()
713  /// @brief Default constructor
714  __hostdev__ Version()
715  : mData(uint32_t(NANOVDB_MAJOR_VERSION_NUMBER) << 21 |
716  uint32_t(NANOVDB_MINOR_VERSION_NUMBER) << 10 |
717  uint32_t(NANOVDB_PATCH_VERSION_NUMBER))
718  {
719  }
720  /// @brief Constructor from a raw uint32_t data representation
721  __hostdev__ Version(uint32_t data) : mData(data) {}
722  /// @brief Constructor from major.minor.patch version numbers
723  __hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
724  : mData(major << 21 | minor << 10 | patch)
725  {
726  NANOVDB_ASSERT(major < (1u << 11)); // max value of major is 2047
727  NANOVDB_ASSERT(minor < (1u << 11)); // max value of minor is 2047
728  NANOVDB_ASSERT(patch < (1u << 10)); // max value of patch is 1023
729  }
730  __hostdev__ bool operator==(const Version& rhs) const { return mData == rhs.mData; }
731  __hostdev__ bool operator<( const Version& rhs) const { return mData < rhs.mData; }
732  __hostdev__ bool operator<=(const Version& rhs) const { return mData <= rhs.mData; }
733  __hostdev__ bool operator>( const Version& rhs) const { return mData > rhs.mData; }
734  __hostdev__ bool operator>=(const Version& rhs) const { return mData >= rhs.mData; }
735  __hostdev__ uint32_t id() const { return mData; }
736  __hostdev__ uint32_t getMajor() const { return (mData >> 21) & ((1u << 11) - 1); }
737  __hostdev__ uint32_t getMinor() const { return (mData >> 10) & ((1u << 11) - 1); }
738  __hostdev__ uint32_t getPatch() const { return mData & ((1u << 10) - 1); }
739  __hostdev__ bool isCompatible() const { return this->getMajor() == uint32_t(NANOVDB_MAJOR_VERSION_NUMBER); }
740  /// @brief Returns the difference between major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER
741  /// @return return 0 if the major version equals NANOVDB_MAJOR_VERSION_NUMBER, else a negative age if this
742 /// instance has a smaller major version (is older), and a positive age if it is newer, i.e. larger.
743  __hostdev__ int age() const {return int(this->getMajor()) - int(NANOVDB_MAJOR_VERSION_NUMBER);}
744 }; // Version
745 
746 /// @brief print the version number to a c-string
747 /// @param dst destination string of size 8 or more
748 /// @param v version to be printed
749 /// @return returns destination string @c dst
750 __hostdev__ inline char* toStr(char *dst, const Version &v)
751 {
752  return util::sprint(dst, v.getMajor(), ".",v.getMinor(), ".",v.getPatch());
753 }
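// Example (illustrative): bit-compact a version triplet and print it
//
//     nanovdb::Version v(32, 9, 0);
//     NANOVDB_ASSERT(v.getMajor() == 32 && v.getMinor() == 9 && v.getPatch() == 0);
//     char str[nanovdb::strlen<nanovdb::Version>()];
//     nanovdb::toStr(str, v); // str now holds "32.9.0"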
754 
755 // ----------------------------> TensorTraits <--------------------------------------
756 
757 template<typename T, int Rank = (util::is_specialization<T, math::Vec3>::value || util::is_specialization<T, math::Vec4>::value || util::is_same<T, math::Rgba8>::value) ? 1 : 0>
758 struct TensorTraits;
759 
760 template<typename T>
761 struct TensorTraits<T, 0>
762 {
763  static const int Rank = 0; // i.e. scalar
764  static const bool IsScalar = true;
765  static const bool IsVector = false;
766  static const int Size = 1;
767  using ElementType = T;
768  static T scalar(const T& s) { return s; }
769 };
770 
771 template<typename T>
772 struct TensorTraits<T, 1>
773 {
774  static const int Rank = 1; // i.e. vector
775  static const bool IsScalar = false;
776  static const bool IsVector = true;
777  static const int Size = T::SIZE;
778  using ElementType = typename T::ValueType;
779  static ElementType scalar(const T& v) { return v.length(); }
780 };
781 
782 // ----------------------------> FloatTraits <--------------------------------------
783 
784 template<typename T, int = sizeof(typename TensorTraits<T>::ElementType)>
785 struct FloatTraits
786 {
787  using FloatType = float;
788 };
789 
790 template<typename T>
791 struct FloatTraits<T, 8>
792 {
793  using FloatType = double;
794 };
795 
796 template<>
797 struct FloatTraits<bool, 1>
798 {
799  using FloatType = bool;
800 };
801 
802 template<>
803 struct FloatTraits<ValueIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
804 {
805  using FloatType = uint64_t;
806 };
807 
808 template<>
809 struct FloatTraits<ValueOnIndex, 1> // size of empty class in C++ is 1 byte and not 0 byte
810 {
811  using FloatType = uint64_t;
812 };
813 
814 template<>
815 struct FloatTraits<ValueMask, 1> // size of empty class in C++ is 1 byte and not 0 byte
816 {
817  using FloatType = bool;
818 };
819 
820 template<>
821 struct FloatTraits<Point, 1> // size of empty class in C++ is 1 byte and not 0 byte
822 {
823  using FloatType = double;
824 };
825 
826 // ----------------------------> mapping BuildType -> GridType <--------------------------------------
827 
828 /// @brief Maps from a templated build type to a GridType enum
829 template<typename BuildT>
830 __hostdev__ inline GridType toGridType()
831 {
832  if constexpr(util::is_same<BuildT, float>::value) { // resolved at compile-time
833  return GridType::Float;
834  } else if constexpr(util::is_same<BuildT, double>::value) {
835  return GridType::Double;
836  } else if constexpr(util::is_same<BuildT, int16_t>::value) {
837  return GridType::Int16;
838  } else if constexpr(util::is_same<BuildT, int32_t>::value) {
839  return GridType::Int32;
840  } else if constexpr(util::is_same<BuildT, int64_t>::value) {
841  return GridType::Int64;
842  } else if constexpr(util::is_same<BuildT, Vec3f>::value) {
843  return GridType::Vec3f;
844  } else if constexpr(util::is_same<BuildT, Vec3d>::value) {
845  return GridType::Vec3d;
846  } else if constexpr(util::is_same<BuildT, uint32_t>::value) {
847  return GridType::UInt32;
848  } else if constexpr(util::is_same<BuildT, ValueMask>::value) {
849  return GridType::Mask;
850  } else if constexpr(util::is_same<BuildT, Half>::value) {
851  return GridType::Half;
852  } else if constexpr(util::is_same<BuildT, ValueIndex>::value) {
853  return GridType::Index;
854  } else if constexpr(util::is_same<BuildT, ValueOnIndex>::value) {
855  return GridType::OnIndex;
856  } else if constexpr(util::is_same<BuildT, bool>::value) {
857  return GridType::Boolean;
858  } else if constexpr(util::is_same<BuildT, math::Rgba8>::value) {
859  return GridType::RGBA8;
860  } else if constexpr(util::is_same<BuildT, Fp4>::value) {
861  return GridType::Fp4;
862  } else if constexpr(util::is_same<BuildT, Fp8>::value) {
863  return GridType::Fp8;
864  } else if constexpr(util::is_same<BuildT, Fp16>::value) {
865  return GridType::Fp16;
866  } else if constexpr(util::is_same<BuildT, FpN>::value) {
867  return GridType::FpN;
868  } else if constexpr(util::is_same<BuildT, Vec4f>::value) {
869  return GridType::Vec4f;
870  } else if constexpr(util::is_same<BuildT, Vec4d>::value) {
871  return GridType::Vec4d;
872  } else if constexpr(util::is_same<BuildT, Point>::value) {
873  return GridType::PointIndex;
874  } else if constexpr(util::is_same<BuildT, Vec3u8>::value) {
875  return GridType::Vec3u8;
876  } else if constexpr(util::is_same<BuildT, Vec3u16>::value) {
877  return GridType::Vec3u16;
878  } else if constexpr(util::is_same<BuildT, uint8_t>::value) {
879  return GridType::UInt8;
880  }
881  return GridType::Unknown;
882 }// toGridType
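// Example (illustrative): the mapping is resolved at compile time, e.g.
//
//     NANOVDB_ASSERT(nanovdb::toGridType<float>() == nanovdb::GridType::Float);
//     NANOVDB_ASSERT(nanovdb::toGridType<nanovdb::ValueOnIndex>() == nanovdb::GridType::OnIndex);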
883 
884 template<typename BuildT>
885 [[deprecated("Use toGridType<T>() instead.")]]
886 __hostdev__ inline GridType mapToGridType(){return toGridType<BuildT>();}
887 
888 // ----------------------------> mapping BuildType -> GridClass <--------------------------------------
889 
890 /// @brief Maps from a templated build type to a GridClass enum
891 template<typename BuildT>
893 {
895  return GridClass::Topology;
896  } else if constexpr(BuildTraits<BuildT>::is_index) {
897  return GridClass::IndexGrid;
898  } else if constexpr(util::is_same<BuildT, math::Rgba8>::value) {
899  return GridClass::VoxelVolume;
900  } else if constexpr(util::is_same<BuildT, Point>::value) {
901  return GridClass::PointIndex;
902  }
903  return defaultClass;
904 }
905 
906 template<typename BuildT>
907 [[deprecated("Use toGridClass<T>() instead.")]]
908 __hostdev__ inline GridClass mapToGridClass(GridClass defaultClass = GridClass::Unknown)
909 {
910  return toGridClass<BuildT>();
911 }
912 
913 // ----------------------------> BitFlags <--------------------------------------
914 
915 template<int N>
916 struct BitArray;
917 template<>
918 struct BitArray<8>
919 {
920  uint8_t mFlags{0};
921 };
922 template<>
923 struct BitArray<16>
924 {
925  uint16_t mFlags{0};
926 };
927 template<>
928 struct BitArray<32>
929 {
930  uint32_t mFlags{0};
931 };
932 template<>
933 struct BitArray<64>
934 {
935  uint64_t mFlags{0};
936 };
937 
938 template<int N>
939 class BitFlags : public BitArray<N>
940 {
941 protected:
942  using BitArray<N>::mFlags;
943 
944 public:
945  using Type = decltype(mFlags);
946  BitFlags() {}
947  BitFlags(Type mask) : BitArray<N>{mask} {}
948  BitFlags(std::initializer_list<uint8_t> list)
949  {
950  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
951  }
952  template<typename MaskT>
953  BitFlags(std::initializer_list<MaskT> list)
954  {
955  for (auto mask : list) mFlags |= static_cast<Type>(mask);
956  }
957  __hostdev__ Type data() const { return mFlags; }
958  __hostdev__ Type& data() { return mFlags; }
959  __hostdev__ void initBit(std::initializer_list<uint8_t> list)
960  {
961  mFlags = 0u;
962  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
963  }
964  template<typename MaskT>
965  __hostdev__ void initMask(std::initializer_list<MaskT> list)
966  {
967  mFlags = 0u;
968  for (auto mask : list) mFlags |= static_cast<Type>(mask);
969  }
970  __hostdev__ Type getFlags() const { return mFlags & (static_cast<Type>(GridFlags::End) - 1u); } // mask out everything except relevant bits
971 
972  __hostdev__ void setOn() { mFlags = ~Type(0u); }
973  __hostdev__ void setOff() { mFlags = Type(0u); }
974 
975  __hostdev__ void setBitOn(uint8_t bit) { mFlags |= static_cast<Type>(1 << bit); }
976  __hostdev__ void setBitOff(uint8_t bit) { mFlags &= ~static_cast<Type>(1 << bit); }
977 
978  __hostdev__ void setBitOn(std::initializer_list<uint8_t> list)
979  {
980  for (auto bit : list) mFlags |= static_cast<Type>(1 << bit);
981  }
982  __hostdev__ void setBitOff(std::initializer_list<uint8_t> list)
983  {
984  for (auto bit : list) mFlags &= ~static_cast<Type>(1 << bit);
985  }
986 
987  template<typename MaskT>
988  __hostdev__ void setMaskOn(MaskT mask) { mFlags |= static_cast<Type>(mask); }
989  template<typename MaskT>
990  __hostdev__ void setMaskOff(MaskT mask) { mFlags &= ~static_cast<Type>(mask); }
991 
992  template<typename MaskT>
993  __hostdev__ void setMaskOn(std::initializer_list<MaskT> list)
994  {
995  for (auto mask : list) mFlags |= static_cast<Type>(mask);
996  }
997  template<typename MaskT>
998  __hostdev__ void setMaskOff(std::initializer_list<MaskT> list)
999  {
1000  for (auto mask : list) mFlags &= ~static_cast<Type>(mask);
1001  }
1002 
1003  __hostdev__ void setBit(uint8_t bit, bool on) { on ? this->setBitOn(bit) : this->setBitOff(bit); }
1004  template<typename MaskT>
1005  __hostdev__ void setMask(MaskT mask, bool on) { on ? this->setMaskOn(mask) : this->setMaskOff(mask); }
1006 
1007  __hostdev__ bool isOn() const { return mFlags == ~Type(0u); }
1008  __hostdev__ bool isOff() const { return mFlags == Type(0u); }
1009  __hostdev__ bool isBitOn(uint8_t bit) const { return 0 != (mFlags & static_cast<Type>(1 << bit)); }
1010  __hostdev__ bool isBitOff(uint8_t bit) const { return 0 == (mFlags & static_cast<Type>(1 << bit)); }
1011  template<typename MaskT>
1012  __hostdev__ bool isMaskOn(MaskT mask) const { return 0 != (mFlags & static_cast<Type>(mask)); }
1013  template<typename MaskT>
1014  __hostdev__ bool isMaskOff(MaskT mask) const { return 0 == (mFlags & static_cast<Type>(mask)); }
1015  /// @brief return true if any of the masks in the list are on
1016  template<typename MaskT>
1017  __hostdev__ bool isMaskOn(std::initializer_list<MaskT> list) const
1018  {
1019  for (auto mask : list) {
1020  if (0 != (mFlags & static_cast<Type>(mask))) return true;
1021  }
1022  return false;
1023  }
1024  /// @brief return true if any of the masks in the list are off
1025  template<typename MaskT>
1026  __hostdev__ bool isMaskOff(std::initializer_list<MaskT> list) const
1027  {
1028  for (auto mask : list) {
1029  if (0 == (mFlags & static_cast<Type>(mask))) return true;
1030  }
1031  return false;
1032  }
1033  /// @brief required for backwards compatibility
1034  __hostdev__ BitFlags& operator=(Type n)
1035  {
1036  mFlags = n;
1037  return *this;
1038  }
1039 }; // BitFlags<N>
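// Example (illustrative): BitFlags<32> can pack the GridFlags defined above into a single word
//
//     nanovdb::BitFlags<32> flags;
//     flags.setMaskOn(nanovdb::GridFlags::HasBBox);
//     flags.setMask(nanovdb::GridFlags::HasMinMax, true);
//     NANOVDB_ASSERT(flags.isMaskOn(nanovdb::GridFlags::HasBBox) && flags.isMaskOn(nanovdb::GridFlags::HasMinMax));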
1040 
1041 // ----------------------------> Mask <--------------------------------------
1042 
1043 /// @brief Bit-mask to encode active states and facilitate sequential iterators
1044 /// and a fast codec for I/O compression.
1045 template<uint32_t LOG2DIM>
1046 class Mask
1047 {
1048 public:
1049  static constexpr uint32_t SIZE = 1U << (3 * LOG2DIM); // Number of bits in mask
1050  static constexpr uint32_t WORD_COUNT = SIZE >> 6; // Number of 64 bit words
1051 
1052  /// @brief Return the memory footprint in bytes of this Mask
1053  __hostdev__ static size_t memUsage() { return sizeof(Mask); }
1054 
1055  /// @brief Return the number of bits available in this Mask
1056  __hostdev__ static uint32_t bitCount() { return SIZE; }
1057 
1058  /// @brief Return the number of machine words used by this Mask
1059  __hostdev__ static uint32_t wordCount() { return WORD_COUNT; }
1060 
1061  /// @brief Return the total number of set bits in this Mask
1062  __hostdev__ uint32_t countOn() const
1063  {
1064  uint32_t sum = 0;
1065  for (const uint64_t *w = mWords, *q = w + WORD_COUNT; w != q; ++w)
1066  sum += util::countOn(*w);
1067  return sum;
1068  }
1069 
1070  /// @brief Return the number of lower set bits in mask up to but excluding the i'th bit
1071  inline __hostdev__ uint32_t countOn(uint32_t i) const
1072  {
1073  uint32_t n = i >> 6, sum = util::countOn(mWords[n] & ((uint64_t(1) << (i & 63u)) - 1u));
1074  for (const uint64_t* w = mWords; n--; ++w)
1075  sum += util::countOn(*w);
1076  return sum;
1077  }
1078 
1079  template<bool On>
1080  class Iterator
1081  {
1082  public:
1083  __hostdev__ Iterator()
1084  : mPos(Mask::SIZE)
1085  , mParent(nullptr)
1086  {
1087  }
1088  __hostdev__ Iterator(uint32_t pos, const Mask* parent)
1089  : mPos(pos)
1090  , mParent(parent)
1091  {
1092  }
1093  Iterator& operator=(const Iterator&) = default;
1094  __hostdev__ uint32_t operator*() const { return mPos; }
1095  __hostdev__ uint32_t pos() const { return mPos; }
1096  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
1097  __hostdev__ Iterator& operator++()
1098  {
1099  mPos = mParent->findNext<On>(mPos + 1);
1100  return *this;
1101  }
1102  __hostdev__ Iterator operator++(int)
1103  {
1104  auto tmp = *this;
1105  ++(*this);
1106  return tmp;
1107  }
1108 
1109  private:
1110  uint32_t mPos;
1111  const Mask* mParent;
1112  }; // Member class Iterator
1113 
1114  class DenseIterator
1115  {
1116  public:
1117  __hostdev__ DenseIterator(uint32_t pos = Mask::SIZE)
1118  : mPos(pos)
1119  {
1120  }
1121  DenseIterator& operator=(const DenseIterator&) = default;
1122  __hostdev__ uint32_t operator*() const { return mPos; }
1123  __hostdev__ uint32_t pos() const { return mPos; }
1124  __hostdev__ operator bool() const { return mPos != Mask::SIZE; }
1125  __hostdev__ DenseIterator& operator++()
1126  {
1127  ++mPos;
1128  return *this;
1129  }
1130  __hostdev__ DenseIterator operator++(int)
1131  {
1132  auto tmp = *this;
1133  ++mPos;
1134  return tmp;
1135  }
1136 
1137  private:
1138  uint32_t mPos;
1139  }; // Member class DenseIterator
1140 
1141  using OnIterator = Iterator<true>;
1142  using OffIterator = Iterator<false>;
1143 
1144  __hostdev__ OnIterator beginOn() const { return OnIterator(this->findFirst<true>(), this); }
1145 
1146  __hostdev__ OffIterator beginOff() const { return OffIterator(this->findFirst<false>(), this); }
1147 
1148  __hostdev__ DenseIterator beginAll() const { return DenseIterator(0); }
1149 
1150  /// @brief Initialize all bits to zero.
1151  __hostdev__ Mask()
1152  {
1153  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1154  mWords[i] = 0;
1155  }
1156  __hostdev__ Mask(bool on)
1157  {
1158  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
1159  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1160  mWords[i] = v;
1161  }
1162 
1163  /// @brief Copy constructor
1164  __hostdev__ Mask(const Mask& other)
1165  {
1166  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1167  mWords[i] = other.mWords[i];
1168  }
1169 
1170  /// @brief Return a pointer to the list of words of the bit mask
1171  __hostdev__ uint64_t* words() { return mWords; }
1172  __hostdev__ const uint64_t* words() const { return mWords; }
1173 
1174  template<typename WordT>
1175  __hostdev__ WordT getWord(uint32_t n) const
1176  {
1177  static_assert(util::is_same<WordT, uint8_t, uint16_t, uint32_t, uint64_t>::value);
1178  NANOVDB_ASSERT(n*8*sizeof(WordT) < SIZE);
1179  return reinterpret_cast<const WordT*>(mWords)[n];
1180  }
1181  template<typename WordT>
1182  __hostdev__ void setWord(WordT w, uint32_t n)
1183  {
1184  static_assert(util::is_same<WordT, uint8_t, uint16_t, uint32_t, uint64_t>::value);
1185  NANOVDB_ASSERT(n*8*sizeof(WordT) < SIZE);
1186  reinterpret_cast<WordT*>(mWords)[n] = w;
1187  }
1188 
1189  /// @brief Assignment operator that works with openvdb::util::NodeMask
1190  template<typename MaskT = Mask>
1191  __hostdev__ Mask& operator=(const MaskT& other)
1192  {
1193  static_assert(sizeof(Mask) == sizeof(MaskT), "Mismatching sizeof");
1194  static_assert(WORD_COUNT == MaskT::WORD_COUNT, "Mismatching word count");
1195  static_assert(LOG2DIM == MaskT::LOG2DIM, "Mismatching LOG2DIM");
1196  auto* src = reinterpret_cast<const uint64_t*>(&other);
1197  for (uint64_t *dst = mWords, *end = dst + WORD_COUNT; dst != end; ++dst)
1198  *dst = *src++;
1199  return *this;
1200  }
1201 
1202  //__hostdev__ Mask& operator=(const Mask& other){return *util::memcpy(this, &other);}
1203  Mask& operator=(const Mask&) = default;
1204 
1205  __hostdev__ bool operator==(const Mask& other) const
1206  {
1207  for (uint32_t i = 0; i < WORD_COUNT; ++i) {
1208  if (mWords[i] != other.mWords[i])
1209  return false;
1210  }
1211  return true;
1212  }
1213 
1214  __hostdev__ bool operator!=(const Mask& other) const { return !((*this) == other); }
1215 
1216  /// @brief Return true if the given bit is set.
1217  __hostdev__ bool isOn(uint32_t n) const { return 0 != (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
1218 
1219  /// @brief Return true if the given bit is NOT set.
1220  __hostdev__ bool isOff(uint32_t n) const { return 0 == (mWords[n >> 6] & (uint64_t(1) << (n & 63))); }
1221 
1222  /// @brief Return true if all the bits are set in this Mask.
1223  __hostdev__ bool isOn() const
1224  {
1225  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1226  if (mWords[i] != ~uint64_t(0))
1227  return false;
1228  return true;
1229  }
1230 
1231  /// @brief Return true if none of the bits are set in this Mask.
1232  __hostdev__ bool isOff() const
1233  {
1234  for (uint32_t i = 0; i < WORD_COUNT; ++i)
1235  if (mWords[i] != uint64_t(0))
1236  return false;
1237  return true;
1238  }
1239 
1240  /// @brief Set the specified bit on.
1241  __hostdev__ void setOn(uint32_t n) { mWords[n >> 6] |= uint64_t(1) << (n & 63); }
1242  /// @brief Set the specified bit off.
1243  __hostdev__ void setOff(uint32_t n) { mWords[n >> 6] &= ~(uint64_t(1) << (n & 63)); }
1244 
1245 #if defined(__CUDACC__) // the following functions only run on the GPU!
1246  __device__ inline void setOnAtomic(uint32_t n)
1247  {
1248  atomicOr(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), 1ull << (n & 63));
1249  }
1250  __device__ inline void setOffAtomic(uint32_t n)
1251  {
1252  atomicAnd(reinterpret_cast<unsigned long long int*>(this) + (n >> 6), ~(1ull << (n & 63)));
1253  }
1254  __device__ inline void setAtomic(uint32_t n, bool on)
1255  {
1256  on ? this->setOnAtomic(n) : this->setOffAtomic(n);
1257  }
1258 /*
1259  template<typename WordT>
1260  __device__ inline void setWordAtomic(WordT w, uint32_t n)
1261  {
1262  static_assert(util::is_same<WordT, uint8_t, uint16_t, uint32_t, uint64_t>::value);
1263  NANOVDB_ASSERT(n*8*sizeof(WordT) < WORD_COUNT);
1264  if constexpr(util::is_same<WordT,uint8_t>::value) {
1265  mask <<= x;
1266  } else if constexpr(util::is_same<WordT,uint16_t>::value) {
1267  unsigned int mask = w;
1268  if (n >> 1) mask <<= 16;
1269  atomicOr(reinterpret_cast<unsigned int*>(this) + n, mask);
1270  } else if constexpr(util::is_same<WordT,uint32_t>::value) {
1271  atomicOr(reinterpret_cast<unsigned int*>(this) + n, w);
1272  } else {
1273  atomicOr(reinterpret_cast<unsigned long long int*>(this) + n, w);
1274  }
1275  }
1276 */
1277 #endif
1278  /// @brief Set the specified bit on or off.
1279  __hostdev__ void set(uint32_t n, bool on)
1280  {
1281 #if 1 // switch between branchless
1282  auto& word = mWords[n >> 6];
1283  n &= 63;
1284  word &= ~(uint64_t(1) << n);
1285  word |= uint64_t(on) << n;
1286 #else
1287  on ? this->setOn(n) : this->setOff(n);
1288 #endif
1289  }
1290 
1291  /// @brief Set all bits on
1292  __hostdev__ void setOn()
1293  {
1294  for (uint32_t i = 0; i < WORD_COUNT; ++i)mWords[i] = ~uint64_t(0);
1295  }
1296 
1297  /// @brief Set all bits off
1298  __hostdev__ void setOff()
1299  {
1300  for (uint32_t i = 0; i < WORD_COUNT; ++i) mWords[i] = uint64_t(0);
1301  }
1302 
1303  /// @brief Set all bits on or off
1304  __hostdev__ void set(bool on)
1305  {
1306  const uint64_t v = on ? ~uint64_t(0) : uint64_t(0);
1307  for (uint32_t i = 0; i < WORD_COUNT; ++i) mWords[i] = v;
1308  }
1309  /// @brief Toggle the state of all bits in the mask
1310  __hostdev__ void toggle()
1311  {
1312  uint32_t n = WORD_COUNT;
1313  for (auto* w = mWords; n--; ++w) *w = ~*w;
1314  }
1315  __hostdev__ void toggle(uint32_t n) { mWords[n >> 6] ^= uint64_t(1) << (n & 63); }
1316 
1317  /// @brief Bitwise intersection
1318  __hostdev__ Mask& operator&=(const Mask& other)
1319  {
1320  uint64_t* w1 = mWords;
1321  const uint64_t* w2 = other.mWords;
1322  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 &= *w2;
1323  return *this;
1324  }
1325  /// @brief Bitwise union
1326  __hostdev__ Mask& operator|=(const Mask& other)
1327  {
1328  uint64_t* w1 = mWords;
1329  const uint64_t* w2 = other.mWords;
1330  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 |= *w2;
1331  return *this;
1332  }
1333  /// @brief Bitwise difference
1334  __hostdev__ Mask& operator-=(const Mask& other)
1335  {
1336  uint64_t* w1 = mWords;
1337  const uint64_t* w2 = other.mWords;
1338  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 &= ~*w2;
1339  return *this;
1340  }
1341  /// @brief Bitwise XOR
1342  __hostdev__ Mask& operator^=(const Mask& other)
1343  {
1344  uint64_t* w1 = mWords;
1345  const uint64_t* w2 = other.mWords;
1346  for (uint32_t n = WORD_COUNT; n--; ++w1, ++w2) *w1 ^= *w2;
1347  return *this;
1348  }
1349 
1350  /// @brief Return the index of the first bit that is on (ON = true) or off (ON = false)
1351  template<bool ON>
1352  __hostdev__ uint32_t findFirst() const
1353  {
1354  uint32_t n = 0u;
1355  const uint64_t* w = mWords;
1356  for (; n < WORD_COUNT && !(ON ? *w : ~*w); ++w, ++n);
1357  return n < WORD_COUNT ? (n << 6) + util::findLowestOn(ON ? *w : ~*w) : SIZE;
1358  }
1359 
1360  /// @brief Return the index of the next bit, starting at @a start, that is on (ON = true) or off (ON = false)
1361  template<bool ON>
1362  __hostdev__ uint32_t findNext(uint32_t start) const
1363  {
1364  uint32_t n = start >> 6; // initiate
1365  if (n >= WORD_COUNT) return SIZE; // check for out of bounds
1366  uint32_t m = start & 63u;
1367  uint64_t b = ON ? mWords[n] : ~mWords[n];
1368  if (b & (uint64_t(1u) << m)) return start; // simple case: start is on/off
1369  b &= ~uint64_t(0u) << m; // mask out lower bits
1370  while (!b && ++n < WORD_COUNT) b = ON ? mWords[n] : ~mWords[n]; // find next non-zero word
1371  return b ? (n << 6) + util::findLowestOn(b) : SIZE; // catch last word=0
1372  }
1373 
1374  /// @brief Return the index of the previous bit, starting at @a start, that is on (ON = true) or off (ON = false)
1375  template<bool ON>
1376  __hostdev__ uint32_t findPrev(uint32_t start) const
1377  {
1378  uint32_t n = start >> 6; // initiate
1379  if (n >= WORD_COUNT) return SIZE; // check for out of bounds
1380  uint32_t m = start & 63u;
1381  uint64_t b = ON ? mWords[n] : ~mWords[n];
1382  if (b & (uint64_t(1u) << m)) return start; // simple case: start is on/off
1383  b &= (uint64_t(1u) << m) - 1u; // mask out higher bits
1384  while (!b && n) b = ON ? mWords[--n] : ~mWords[--n]; // find previous non-zero word
1385  return b ? (n << 6) + util::findHighestOn(b) : SIZE; // catch first word=0
1386  }
1387 
1388 private:
1389  uint64_t mWords[WORD_COUNT];
1390 }; // Mask class
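// Example (illustrative): a Mask<3> encodes the active states of the 8^3 = 512 values in a leaf node
//
//     nanovdb::Mask<3> mask(false); // all 512 bits off
//     mask.setOn(0);
//     mask.setOn(511);
//     NANOVDB_ASSERT(mask.countOn() == 2u);
//     for (auto it = mask.beginOn(); it; ++it) { /* visits bit 0, then bit 511 */ }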
1391 
1392 // ----------------------------> Map <--------------------------------------
1393 
1394 /// @brief Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation
1395 struct Map
1396 { // 264B (not 32B aligned!)
1397  float mMatF[9]; // 9*4B <- 3x3 matrix
1398  float mInvMatF[9]; // 9*4B <- 3x3 matrix
1399  float mVecF[3]; // 3*4B <- translation
1400  float mTaperF; // 4B, placeholder for taper value
1401  double mMatD[9]; // 9*8B <- 3x3 matrix
1402  double mInvMatD[9]; // 9*8B <- 3x3 matrix
1403  double mVecD[3]; // 3*8B <- translation
1404  double mTaperD; // 8B, placeholder for taper value
1405 
1406  /// @brief Default constructor for the identity map
1407  __hostdev__ Map()
1408  : mMatF{ 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
1409  , mInvMatF{1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f}
1410  , mVecF{0.0f, 0.0f, 0.0f}
1411  , mTaperF{1.0f}
1412  , mMatD{ 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
1413  , mInvMatD{1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
1414  , mVecD{0.0, 0.0, 0.0}
1415  , mTaperD{1.0}
1416  {
1417  }
1418  __hostdev__ Map(double s, const Vec3d& t = Vec3d(0.0, 0.0, 0.0))
1419  : mMatF{float(s), 0.0f, 0.0f, 0.0f, float(s), 0.0f, 0.0f, 0.0f, float(s)}
1420  , mInvMatF{1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s), 0.0f, 0.0f, 0.0f, 1.0f / float(s)}
1421  , mVecF{float(t[0]), float(t[1]), float(t[2])}
1422  , mTaperF{1.0f}
1423  , mMatD{s, 0.0, 0.0, 0.0, s, 0.0, 0.0, 0.0, s}
1424  , mInvMatD{1.0 / s, 0.0, 0.0, 0.0, 1.0 / s, 0.0, 0.0, 0.0, 1.0 / s}
1425  , mVecD{t[0], t[1], t[2]}
1426  , mTaperD{1.0}
1427  {
1428  }
1429 
1430  /// @brief Initialize the member data from 3x3 or 4x4 matrices
1431  /// @note This is not __hostdev__ since then MatT=openvdb::Mat4d will produce warnings
1432  template<typename MatT, typename Vec3T>
1433  void set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper = 1.0);
1434 
1435  /// @brief Initialize the member data from 4x4 matrices
1436  /// @note The last (4th) row of invMat is actually ignored.
1437  /// This is not __hostdev__ since then Mat4T=openvdb::Mat4d will produce warnings
1438  template<typename Mat4T>
1439  void set(const Mat4T& mat, const Mat4T& invMat, double taper = 1.0) { this->set(mat, invMat, mat[3], taper); }
1440 
1441  template<typename Vec3T>
1442  void set(double scale, const Vec3T& translation, double taper = 1.0);
1443 
1444  /// @brief Apply the forward affine transformation to a vector using 64bit floating point arithmetics.
1445  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
1446  /// @tparam Vec3T Template type of the 3D vector to be mapped
1447  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1448  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
1449  template<typename Vec3T>
1450  __hostdev__ Vec3T applyMap(const Vec3T& ijk) const { return math::matMult(mMatD, mVecD, ijk); }
1451 
1452  /// @brief Apply the forward affine transformation to a vector using 32bit floating point arithmetics.
1453  /// @note Typically this operation is used for the scale, rotation and translation of index -> world mapping
1454  /// @tparam Vec3T Template type of the 3D vector to be mapped
1455  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1456  /// @return Forward mapping for affine transformation, i.e. (mat x ijk) + translation
1457  template<typename Vec3T>
1458  __hostdev__ Vec3T applyMapF(const Vec3T& ijk) const { return math::matMult(mMatF, mVecF, ijk); }
1459 
1460  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1461  /// e.g. scale and rotation WITHOUT translation.
1462  /// @note Typically this operation is used for scale and rotation from index -> world mapping
1463  /// @tparam Vec3T Template type of the 3D vector to be mapped
1464  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1465  /// @return linear forward 3x3 mapping of the input vector
1466  template<typename Vec3T>
1467  __hostdev__ Vec3T applyJacobian(const Vec3T& ijk) const { return math::matMult(mMatD, ijk); }
1468 
1469  /// @brief Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
1470  /// e.g. scale and rotation WITHOUT translation.
1471  /// @note Typically this operation is used for scale and rotation from index -> world mapping
1472  /// @tparam Vec3T Template type of the 3D vector to be mapped
1473  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1474  /// @return linear forward 3x3 mapping of the input vector
1475  template<typename Vec3T>
1476  __hostdev__ Vec3T applyJacobianF(const Vec3T& ijk) const { return math::matMult(mMatF, ijk); }
1477 
1478  /// @brief Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
1479  /// @note Typically this operation is used for the world -> index mapping
1480  /// @tparam Vec3T Template type of the 3D vector to be mapped
1481  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1482  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
1483  template<typename Vec3T>
1484  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const
1485  {
1486  return math::matMult(mInvMatD, Vec3T(xyz[0] - mVecD[0], xyz[1] - mVecD[1], xyz[2] - mVecD[2]));
1487  }
1488 
1489  /// @brief Apply the inverse affine mapping to a vector using 32bit floating point arithmetics.
1490  /// @note Typically this operation is used for the world -> index mapping
1491  /// @tparam Vec3T Template type of the 3D vector to be mapped
1492  /// @param xyz 3D vector to be mapped - typically floating point world coordinates
1493  /// @return Inverse affine mapping of the input @c xyz i.e. (xyz - translation) x mat^-1
1494  template<typename Vec3T>
1495  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const
1496  {
1497  return math::matMult(mInvMatF, Vec3T(xyz[0] - mVecF[0], xyz[1] - mVecF[1], xyz[2] - mVecF[2]));
1498  }
1499 
1500  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1501  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1502  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1503  /// @tparam Vec3T Template type of the 3D vector to be mapped
1504  /// @param xyz 3D vector to be mapped - typically a floating point direction in world space
1505  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1506  template<typename Vec3T>
1507  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return math::matMult(mInvMatD, xyz); }
1508 
1509  /// @brief Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmetics,
1510  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1511  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1512  /// @tparam Vec3T Template type of the 3D vector to be mapped
1513  /// @param xyz 3D vector to be mapped - typically a floating point direction in world space
1514  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1515  template<typename Vec3T>
1516  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return math::matMult(mInvMatF, xyz); }
1517 
1518  /// @brief Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmetics,
1519  /// e.g. inverse scale and inverse rotation WITHOUT translation.
1520  /// @note Typically this operation is used for scale and rotation from world -> index mapping
1521  /// @tparam Vec3T Template type of the 3D vector to be mapped
1522  /// @param ijk 3D vector to be mapped - typically floating point index coordinates
1523  /// @return linear inverse 3x3 mapping of the input vector i.e. xyz x mat^-1
1524  template<typename Vec3T>
1525  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return math::matMultT(mInvMatD, xyz); }
1526  template<typename Vec3T>
1527  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return math::matMultT(mInvMatF, xyz); }
1528 
1529  /// @brief Return a voxel's size in each coordinate direction, measured at the origin
1530  __hostdev__ Vec3d getVoxelSize() const { return this->applyMap(Vec3d(1)) - this->applyMap(Vec3d(0)); }
1531 }; // Map
1532 
1533 template<typename MatT, typename Vec3T>
1534 inline void Map::set(const MatT& mat, const MatT& invMat, const Vec3T& translate, double taper)
1535 {
1536  float * mf = mMatF, *vf = mVecF, *mif = mInvMatF;
1537  double *md = mMatD, *vd = mVecD, *mid = mInvMatD;
1538  mTaperF = static_cast<float>(taper);
1539  mTaperD = taper;
1540  for (int i = 0; i < 3; ++i) {
1541  *vd++ = translate[i]; //translation
1542  *vf++ = static_cast<float>(translate[i]); //translation
1543  for (int j = 0; j < 3; ++j) {
1544  *md++ = mat[j][i]; //transposed
1545  *mid++ = invMat[j][i];
1546  *mf++ = static_cast<float>(mat[j][i]); //transposed
1547  *mif++ = static_cast<float>(invMat[j][i]);
1548  }
1549  }
1550 }
1551 
1552 template<typename Vec3T>
1553 inline void Map::set(double dx, const Vec3T& trans, double taper)
1554 {
1555  NANOVDB_ASSERT(dx > 0.0);
1556  const double mat[3][3] = { {dx, 0.0, 0.0}, // row 0
1557  {0.0, dx, 0.0}, // row 1
1558  {0.0, 0.0, dx} }; // row 2
1559  const double idx = 1.0 / dx;
1560  const double invMat[3][3] = { {idx, 0.0, 0.0}, // row 0
1561  {0.0, idx, 0.0}, // row 1
1562  {0.0, 0.0, idx} }; // row 2
1563  this->set(mat, invMat, trans, taper);
1564 }
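/// @par Example
/// An illustrative sketch of the index<->world mappings defined by Map above, assuming a
/// uniform voxel size of 0.5 world units and no rotation or translation:
/// @code
/// nanovdb::Map map(0.5); // uniform scale of 0.5
/// nanovdb::Vec3d xyz = map.applyMap(nanovdb::Vec3d(1.0, 2.0, 3.0)); // index -> world: (0.5, 1.0, 1.5)
/// nanovdb::Vec3d ijk = map.applyInverseMap(xyz);                    // world -> index: (1.0, 2.0, 3.0)
/// nanovdb::Vec3d dx  = map.getVoxelSize();                          // (0.5, 0.5, 0.5)
/// @endcode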
1565 
1566 // ----------------------------> GridBlindMetaData <--------------------------------------
1567 
1568 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridBlindMetaData
1569 { // 288 bytes
1570  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less!
1571  int64_t mDataOffset; // byte offset to the blind data, relative to GridBlindMetaData::this.
1572  uint64_t mValueCount; // number of blind values, e.g. point count
1573  uint32_t mValueSize;// byte size of each value, e.g. 4 if mDataType=Float and 1 if mDataType=Unknown since that amounts to char
1574  GridBlindDataSemantic mSemantic; // semantic meaning of the data.
1575  GridBlindDataClass mDataClass; // 4 bytes
1576  GridType mDataType; // 4 bytes
1577  char mName[MaxNameSize]; // note this includes the NULL termination
1578  // no padding required for 32 byte alignment
1579 
1580  /// @brief Empty constructor
1581  GridBlindMetaData()
1582  : mDataOffset(0)
1583  , mValueCount(0)
1584  , mValueSize(0)
1585  , mSemantic(GridBlindDataSemantic::Unknown)
1586  , mDataClass(GridBlindDataClass::Unknown)
1587  , mDataType(GridType::Unknown)
1588  {
1589  util::memzero(mName, MaxNameSize);
1590  }
1591 
1592  GridBlindMetaData(int64_t dataOffset, uint64_t valueCount, uint32_t valueSize, GridBlindDataSemantic semantic, GridBlindDataClass dataClass, GridType dataType)
1593  : mDataOffset(dataOffset)
1594  , mValueCount(valueCount)
1595  , mValueSize(valueSize)
1596  , mSemantic(semantic)
1597  , mDataClass(dataClass)
1598  , mDataType(dataType)
1599  {
1600  util::memzero(mName, MaxNameSize);
1601  }
1602 
1603  /// @brief Copy constructor that resets mDataOffset and copies mName
1604  GridBlindMetaData(const GridBlindMetaData& other)
1605  : mDataOffset(util::PtrDiff(util::PtrAdd(&other, other.mDataOffset), this))
1606  , mValueCount(other.mValueCount)
1607  , mValueSize(other.mValueSize)
1608  , mSemantic(other.mSemantic)
1609  , mDataClass(other.mDataClass)
1610  , mDataType(other.mDataType)
1611  {
1612  util::strncpy(mName, other.mName, MaxNameSize);
1613  }
1614 
1615  /// @brief Copy assignment operator that resets mDataOffset and copies mName
1616  /// @param rhs right-hand instance to copy
1617  /// @return reference to itself
1618  GridBlindMetaData& operator=(const GridBlindMetaData& rhs)
1619  {
1620  mDataOffset = util::PtrDiff(util::PtrAdd(&rhs, rhs.mDataOffset), this);
1621  mValueCount = rhs.mValueCount;
1622  mValueSize = rhs.mValueSize;
1623  mSemantic = rhs.mSemantic;
1624  mDataClass = rhs.mDataClass;
1625  mDataType = rhs.mDataType;
1626  util::strncpy(mName, rhs.mName, MaxNameSize);
1627  return *this;
1628  }
1629 
1630  __hostdev__ void setBlindData(const void* blindData)
1631  {
1632  mDataOffset = util::PtrDiff(blindData, this);
1633  }
1634 
1635  /// @brief Sets the name string
1636  /// @param name c-string source name
1637  /// @return returns false if @c name has too many characters
1638  __hostdev__ bool setName(const char* name){return util::strncpy(mName, name, MaxNameSize)[MaxNameSize-1] == '\0';}
1639 
1640  /// @brief returns a const void pointer to the blind data
1641  /// @note assumes that setBlindData was called
1642  __hostdev__ const void* blindData() const
1643  {
1644  NANOVDB_ASSERT(mDataOffset != 0);
1645  return util::PtrAdd(this, mDataOffset);
1646  }
1647 
1648  /// @brief Get a const pointer to the blind data represented by this meta data
1649  /// @tparam BlindDataT Expected value type of the blind data.
1650  /// @return Returns NULL if mDataType!=toGridType<BlindDataT>(), else a const pointer of type BlindDataT.
1651  /// @note Use mDataType=Unknown if BlindDataT is a custom data type unknown to NanoVDB.
1652  template<typename BlindDataT>
1653  __hostdev__ const BlindDataT* getBlindData() const
1654  {
1655  return mDataOffset && (mDataType == toGridType<BlindDataT>()) ? util::PtrAdd<BlindDataT>(this, mDataOffset) : nullptr;
1656  }
1657 
1658  /// @brief return true if this meta data has a valid combination of semantic, class and value tags.
1659  /// @note this does not check if the mDataOffset has been set! It is intended to catch invalid combinations
1660  /// of semantic, class and value tags.
1661  __hostdev__ bool isValid() const
1662  {
1663  auto check = [&]()->bool{
1664  switch (mDataType){
1665  //case GridType::Unknown: return mValueSize==1u;// i.e. we encode data as mValueCount chars
1666  case GridType::Float: return mValueSize==4u;
1667  case GridType::Double: return mValueSize==8u;
1668  case GridType::Int16: return mValueSize==2u;
1669  case GridType::Int32: return mValueSize==4u;
1670  case GridType::Int64: return mValueSize==8u;
1671  case GridType::Vec3f: return mValueSize==12u;
1672  case GridType::Vec3d: return mValueSize==24u;
1673  case GridType::Half: return mValueSize==2u;
1674  case GridType::RGBA8: return mValueSize==4u;
1675  case GridType::Fp8: return mValueSize==1u;
1676  case GridType::Fp16: return mValueSize==2u;
1677  case GridType::Vec4f: return mValueSize==16u;
1678  case GridType::Vec4d: return mValueSize==32u;
1679  case GridType::Vec3u8: return mValueSize==3u;
1680  case GridType::Vec3u16: return mValueSize==6u;
1681  default: return true;}// all other combinations are valid
1682  };
1683  //if (!check()) {
1684  // char str[20];
1685  // printf("Inconsistent blind data properties: size=%u, GridType=\"%s\"\n",(uint32_t)mValueSize, toStr(str, mDataType) );
1686  //}
1687  return nanovdb::isValid(mDataClass, mSemantic, mDataType) && check();
1688  }
1689 
1690  /// @brief return size in bytes of the blind data represented by this blind meta data
1691  /// @note This size includes possible padding for 32 byte alignment. The actual amount
1692  /// of blind data is mValueCount * mValueSize
1693  __hostdev__ uint64_t blindDataSize() const
1694  {
1695  return math::AlignUp<NANOVDB_DATA_ALIGNMENT>(mValueCount * mValueSize);
1696  }
1697 }; // GridBlindMetaData
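/// @par Example
/// An illustrative sketch of reading typed blind data through the meta data above; the
/// function name is hypothetical and the meta data is assumed to hold Vec3f values
/// (e.g. point positions):
/// @code
/// float sumX(const nanovdb::GridBlindMetaData& meta)
/// {
///     float sum = 0.0f;
///     if (const nanovdb::Vec3f* p = meta.getBlindData<nanovdb::Vec3f>()) {// NULL unless mDataType==GridType::Vec3f
///         for (uint64_t i = 0; i < meta.mValueCount; ++i) sum += p[i][0];
///     }
///     return sum;
/// }
/// @endcode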
1698 
1699 // ----------------------------> NodeTrait <--------------------------------------
1700 
1701 /// @brief Struct to derive node type from its level in a given
1702 /// grid, tree or root while preserving constness
1703 template<typename GridOrTreeOrRootT, int LEVEL>
1704 struct NodeTrait;
1705 
1706 // Partial template specialization of above Node struct
1707 template<typename GridOrTreeOrRootT>
1708 struct NodeTrait<GridOrTreeOrRootT, 0>
1709 {
1710  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1711  using Type = typename GridOrTreeOrRootT::LeafNodeType;
1712  using type = typename GridOrTreeOrRootT::LeafNodeType;
1713 };
1714 template<typename GridOrTreeOrRootT>
1715 struct NodeTrait<const GridOrTreeOrRootT, 0>
1716 {
1717  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1718  using Type = const typename GridOrTreeOrRootT::LeafNodeType;
1719  using type = const typename GridOrTreeOrRootT::LeafNodeType;
1720 };
1721 
1722 template<typename GridOrTreeOrRootT>
1723 struct NodeTrait<GridOrTreeOrRootT, 1>
1724 {
1725  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1726  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1727  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1728 };
1729 template<typename GridOrTreeOrRootT>
1730 struct NodeTrait<const GridOrTreeOrRootT, 1>
1731 {
1732  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1733  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1734  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType;
1735 };
1736 template<typename GridOrTreeOrRootT>
1737 struct NodeTrait<GridOrTreeOrRootT, 2>
1738 {
1739  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1740  using Type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1741  using type = typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1742 };
1743 template<typename GridOrTreeOrRootT>
1744 struct NodeTrait<const GridOrTreeOrRootT, 2>
1745 {
1746  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1747  using Type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1748  using type = const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType;
1749 };
1750 template<typename GridOrTreeOrRootT>
1751 struct NodeTrait<GridOrTreeOrRootT, 3>
1752 {
1753  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1754  using Type = typename GridOrTreeOrRootT::RootNodeType;
1755  using type = typename GridOrTreeOrRootT::RootNodeType;
1756 };
1757 
1758 template<typename GridOrTreeOrRootT>
1759 struct NodeTrait<const GridOrTreeOrRootT, 3>
1760 {
1761  static_assert(GridOrTreeOrRootT::RootNodeType::LEVEL == 3, "Tree depth is not supported");
1762  using Type = const typename GridOrTreeOrRootT::RootNodeType;
1763  using type = const typename GridOrTreeOrRootT::RootNodeType;
1764 };
1765 
1766 template<typename GridOrTreeOrRootT, int LEVEL>
1767 using NodeType = typename NodeTrait<GridOrTreeOrRootT, LEVEL>::Type;
1768 
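/// @par Example
/// An illustrative sketch of deriving node types with NodeTrait, assuming the standard
/// FloatTree alias defined elsewhere in this header:
/// @code
/// using TreeT  = nanovdb::FloatTree;
/// using LeafT  = nanovdb::NodeTrait<TreeT, 0>::type; // leaf nodes
/// using LowerT = nanovdb::NodeTrait<TreeT, 1>::type; // lower internal nodes
/// using UpperT = nanovdb::NodeTrait<TreeT, 2>::type; // upper internal nodes
/// using RootT  = nanovdb::NodeTrait<TreeT, 3>::type; // root node
/// @endcode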
1769 // ------------> Forward declarations of accelerated random access methods <---------------
1770 
1771 template<typename BuildT>
1772 struct GetValue;
1773 template<typename BuildT>
1774 struct SetValue;
1775 template<typename BuildT>
1776 struct SetVoxel;
1777 template<typename BuildT>
1778 struct GetState;
1779 template<typename BuildT>
1780 struct GetDim;
1781 template<typename BuildT>
1782 struct GetLeaf;
1783 template<typename BuildT>
1784 struct ProbeValue;
1785 template<typename BuildT>
1786 struct GetNodeInfo;
1787 
1788 // ----------------------------> CheckMode <----------------------------------
1789 
1790 /// @brief List of different modes for computing a checksum
1791 enum class CheckMode : uint32_t { Disable = 0, // no computation
1792  Empty = 0,
1793  Half = 1,
1794  Partial = 1, // fast but approximate
1795  Default = 1, // defaults to Partial
1796  Full = 2, // slow but accurate
1797  End = 3, // marks the end of the enum list
1798  StrLen = 9 + End};
1799 
1800 /// @brief Prints CheckMode enum to a c-string
1801 /// @param dst Destination c-string
1802 /// @param mode CheckMode enum to be converted to string
1803 /// @return destination string @c dst
1804 __hostdev__ inline char* toStr(char *dst, CheckMode mode)
1805 {
1806  switch (mode){
1807  case CheckMode::Half: return util::strcpy(dst, "half");
1808  case CheckMode::Full: return util::strcpy(dst, "full");
1809  default: return util::strcpy(dst, "disabled");// StrLen = 8 + 1 + End
1810  }
1811 }
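/// @par Example
/// An illustrative sketch of converting a CheckMode to a c-string; the destination buffer
/// is assumed to hold at least CheckMode::StrLen characters:
/// @code
/// char str[static_cast<int>(nanovdb::CheckMode::StrLen)];
/// nanovdb::toStr(str, nanovdb::CheckMode::Full); // str now contains "full"
/// @endcode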
1812 
1813 // ----------------------------> Checksum <----------------------------------
1814 
1815 /// @brief Class that encapsulates two CRC32 checksums, one for the Grid, Tree and Root node meta data
1816 /// and one for the remaining grid nodes.
1817 class Checksum
1818 {
1819  /// Three types of checksums:
1820  /// 1) Empty: all 64 bits are on (used to signify a disabled or undefined checksum)
1821  /// 2) Half: Upper 32 bits are on and not all of lower 32 bits are on (lower 32 bits checksum head of grid)
1822  /// 3) Full: Not all of the 64 bits are on (lower 32 bits checksum head of grid and upper 32 bits checksum tail of grid)
1823  union { uint32_t mCRC32[2]; uint64_t mCRC64; };// mCRC32[0] is checksum of Grid, Tree and Root, and mCRC32[1] is checksum of nodes
1824 
1825 public:
1826 
1827  static constexpr uint32_t EMPTY32 = ~uint32_t{0};
1828  static constexpr uint64_t EMPTY64 = ~uint64_t(0);
1829 
1830  /// @brief default constructor initiates checksum to EMPTY
1831  __hostdev__ Checksum() : mCRC64{EMPTY64} {}
1832 
1833  /// @brief Constructor that allows the two 32bit checksums to be initiated explicitly
1834  /// @param head Initial 32bit CRC checksum of grid, tree and root data
1835  /// @param tail Initial 32bit CRC checksum of all the nodes and blind data
1836  __hostdev__ Checksum(uint32_t head, uint32_t tail) : mCRC32{head, tail} {}
1837 
1838  /// @brief Constructor that initiates the checksum from a 64 bit value and a CheckMode
1839  /// @param checksum 64 bit checksum with the head in the lower and the tail in the upper 32 bits
1840  /// @param mode CheckMode of the checksum; Disable ignores @c checksum and Partial discards its tail
1841  __hostdev__ Checksum(uint64_t checksum, CheckMode mode = CheckMode::Full) : mCRC64{mode == CheckMode::Disable ? EMPTY64 : checksum}
1842  {
1843  if (mode == CheckMode::Partial) mCRC32[1] = EMPTY32;
1844  }
1845 
1846  /// @brief return the 64 bit checksum of this instance
1847  [[deprecated("Use Checksum::data instead.")]]
1848  __hostdev__ uint64_t checksum() const { return mCRC64; }
1849  [[deprecated("Use Checksum::head and Checksum::tail instead.")]]
1850  __hostdev__ uint32_t& checksum(int i) {NANOVDB_ASSERT(i==0 || i==1); return mCRC32[i]; }
1851  [[deprecated("Use Checksum::head and Checksum::tail instead.")]]
1852  __hostdev__ uint32_t checksum(int i) const {NANOVDB_ASSERT(i==0 || i==1); return mCRC32[i]; }
1853 
1854  __hostdev__ uint64_t full() const { return mCRC64; }
1855  __hostdev__ uint64_t& full() { return mCRC64; }
1856  __hostdev__ uint32_t head() const { return mCRC32[0]; }
1857  __hostdev__ uint32_t& head() { return mCRC32[0]; }
1858  __hostdev__ uint32_t tail() const { return mCRC32[1]; }
1859  __hostdev__ uint32_t& tail() { return mCRC32[1]; }
1860 
1861  /// @brief return true if the 64 bit checksum is partial, i.e. of head only
1862  [[deprecated("Use Checksum::isHalf instead.")]]
1863  __hostdev__ bool isPartial() const { return mCRC32[0] != EMPTY32 && mCRC32[1] == EMPTY32; }
1864  __hostdev__ bool isHalf() const { return mCRC32[0] != EMPTY32 && mCRC32[1] == EMPTY32; }
1865 
1866  /// @brief return true if the 64 bit checksum is full, i.e. of both head and nodes
1867  __hostdev__ bool isFull() const { return mCRC64 != EMPTY64 && mCRC32[1] != EMPTY32; }
1868 
1869  /// @brief return true if the 64 bit checksum is disabled (unset)
1870  __hostdev__ bool isEmpty() const { return mCRC64 == EMPTY64; }
1871 
1872  __hostdev__ void disable() { mCRC64 = EMPTY64; }
1873 
1874  /// @brief return the mode of the 64 bit checksum
1875  __hostdev__ CheckMode mode() const
1876  {
1877  return mCRC64 == EMPTY64 ? CheckMode::Disable :
1878  mCRC32[1] == EMPTY32 ? CheckMode::Partial : CheckMode::Full;
1879  }
1880 
1881  /// @brief return true if the checksums are identical
1882  /// @param rhs other Checksum
1883  __hostdev__ bool operator==(const Checksum &rhs) const {return mCRC64 == rhs.mCRC64;}
1884 
1885  /// @brief return true if the checksums are not identical
1886  /// @param rhs other Checksum
1887  __hostdev__ bool operator!=(const Checksum &rhs) const {return mCRC64 != rhs.mCRC64;}
1888 };// Checksum
1889 
1890 /// @brief Maps 64 bit checksum to CheckMode enum
1891 /// @param checksum 64 bit checksum with two CRC32 codes
1892 /// @return CheckMode enum
1893 __hostdev__ inline CheckMode toCheckMode(const Checksum &checksum){return checksum.mode();}
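/// @par Example
/// An illustrative sketch of constructing and inspecting checksums with the class above:
/// @code
/// nanovdb::Checksum cs; // default constructed, i.e. disabled
/// // cs.isEmpty() == true and cs.mode() == CheckMode::Disable
/// cs = nanovdb::Checksum(0x12345678u, nanovdb::Checksum::EMPTY32); // head only
/// // cs.isHalf() == true, cs.isFull() == false, toCheckMode(cs) == CheckMode::Partial
/// @endcode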
1894 
1895 // ----------------------------> Grid <--------------------------------------
1896 
1897 /*
1898  The following class and comment is for internal use only
1899 
1900  Memory layout:
1901 
1902  Grid -> 39 x double (world bbox and affine transformation)
1903  Tree -> Root 3 x ValueType + int32_t + N x Tiles (background,min,max,tileCount + tileCount x Tiles)
1904 
1905  N2 upper InternalNodes each with 2 bit masks, N2 tiles, and min/max values
1906 
1907  N1 lower InternalNodes each with 2 bit masks, N1 tiles, and min/max values
1908 
1909  N0 LeafNodes each with a bit mask, N0 ValueTypes and min/max
1910 
1911  Example layout: ("---" implies it has a custom offset, "..." implies zero or more)
1912  [GridData][TreeData]---[RootData][ROOT TILES...]---[InternalData<5>]---[InternalData<4>]---[LeafData<3>]---[BLINDMETA...]---[BLIND0]---[BLIND1]---etc.
1913 */
1914 
1915 /// @brief Struct with all the member data of the Grid (useful during serialization of an openvdb grid)
1916 ///
1917 /// @note The transform is assumed to be affine (so linear) and have uniform scale! So frustum transforms
1918 /// and non-uniform scaling are not supported (primarily because they complicate ray-tracing in index space)
1919 ///
1920 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
1921 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) GridData
1922 { // sizeof(GridData) = 672B
1923  static const int MaxNameSize = 256; // due to NULL termination the maximum length is one less
1924  uint64_t mMagic; // 8B (0) magic to validate it is valid grid data.
1925  Checksum mChecksum; // 8B (8). Checksum of grid buffer.
1926  Version mVersion; // 4B (16) major, minor, and patch version numbers
1927  BitFlags<32> mFlags; // 4B (20). flags for grid.
1928  uint32_t mGridIndex; // 4B (24). Index of this grid in the buffer
1929  uint32_t mGridCount; // 4B (28). Total number of grids in the buffer
1930  uint64_t mGridSize; // 8B (32). byte count of this entire grid occupied in the buffer.
1931  char mGridName[MaxNameSize]; // 256B (40)
1932  Map mMap; // 264B (296). affine transformation between index and world space in both single and double precision
1933  Vec3dBBox mWorldBBox; // 48B (560). floating-point AABB of active values in WORLD SPACE (2 x 3 doubles)
1934  Vec3d mVoxelSize; // 24B (608). size of a voxel in world units
1935  GridClass mGridClass; // 4B (632).
1936  GridType mGridType; // 4B (636).
1937  int64_t mBlindMetadataOffset; // 8B (640). offset to beginning of GridBlindMetaData structures that follow this grid.
1938  uint32_t mBlindMetadataCount; // 4B (648). count of GridBlindMetaData structures that follow this grid.
1939  uint32_t mData0; // 4B (652) unused
1940  uint64_t mData1; // 8B (656) is used for the total number of values indexed by an IndexGrid
1941  uint64_t mData2; // 8B (664) padding to 32 B alignment
1942  /// @brief Use this method to initiate most member data
1943  GridData& operator=(const GridData&) = default;
1944  //__hostdev__ GridData& operator=(const GridData& other){return *util::memcpy(this, &other);}
1945  __hostdev__ void init(std::initializer_list<GridFlags> list = {GridFlags::IsBreadthFirst},
1946  uint64_t gridSize = 0u,
1947  const Map& map = Map(),
1948  GridType gridType = GridType::Unknown,
1949  GridClass gridClass = GridClass::Unknown)
1950  {
1951 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
1952  mMagic = NANOVDB_MAGIC_GRID;
1953 #else
1954  mMagic = NANOVDB_MAGIC_NUMB;
1955 #endif
1956  mChecksum.disable();// all 64 bits ON means checksum is disabled
1957  mVersion = Version();
1958  mFlags.initMask(list);
1959  mGridIndex = 0u;
1960  mGridCount = 1u;
1961  mGridSize = gridSize;
1962  mGridName[0] = '\0';
1963  mMap = map;
1964  mWorldBBox = Vec3dBBox();// invalid bbox
1965  mVoxelSize = map.getVoxelSize();
1966  mGridClass = gridClass;
1967  mGridType = gridType;
1968  mBlindMetadataOffset = mGridSize; // i.e. no blind data
1969  mBlindMetadataCount = 0u; // i.e. no blind data
1970  mData0 = 0u; // zero padding
1971  mData1 = 0u; // only used for index and point grids
1972 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
1973  mData2 = 0u;// unused
1974 #else
1975  mData2 = NANOVDB_MAGIC_GRID; // since version 32.6.0 (will change in the future)
1976 #endif
1977  }
1978  /// @brief return true if the magic number and the version are both valid
1979  __hostdev__ bool isValid() const {
1980  // Before v32.6.0: toMagic(mMagic) = MagicType::NanoVDB and mData2 was undefined
1981  // For v32.6.0: toMagic(mMagic) = MagicType::NanoVDB and toMagic(mData2) = MagicType::NanoGrid
1982  // After v32.7.X: toMagic(mMagic) = MagicType::NanoGrid and mData2 will again be undefined
1983  const MagicType magic = toMagic(mMagic);
1984  if (magic == MagicType::NanoGrid || toMagic(mData2) == MagicType::NanoGrid) return true;
1985  bool test = magic == MagicType::NanoVDB;// could be GridData or io::FileHeader
1986  if (test) test = mVersion.isCompatible();
1987  if (test) test = mGridCount > 0u && mGridIndex < mGridCount;
1988  if (test) test = mGridClass < GridClass::End && mGridType < GridType::End;
1989  return test;
1990  }
1991  // Set and unset various bit flags
1992  __hostdev__ void setMinMaxOn(bool on = true) { mFlags.setMask(GridFlags::HasMinMax, on); }
1993  __hostdev__ void setBBoxOn(bool on = true) { mFlags.setMask(GridFlags::HasBBox, on); }
1994  __hostdev__ void setLongGridNameOn(bool on = true) { mFlags.setMask(GridFlags::HasLongGridName, on); }
1995  __hostdev__ void setAverageOn(bool on = true) { mFlags.setMask(GridFlags::HasAverage, on); }
1996  __hostdev__ void setStdDeviationOn(bool on = true) { mFlags.setMask(GridFlags::HasStdDeviation, on); }
1997  __hostdev__ bool setGridName(const char* src)
1998  {
1999  const bool success = (util::strncpy(mGridName, src, MaxNameSize)[MaxNameSize-1] == '\0');
2000  if (!success) mGridName[MaxNameSize-1] = '\0';
2001  return success; // returns true if input grid name is NOT longer than MaxNameSize characters
2002  }
2003  // Affine transformations based on double precision
2004  template<typename Vec3T>
2005  __hostdev__ Vec3T applyMap(const Vec3T& xyz) const { return mMap.applyMap(xyz); } // Pos: index -> world
2006  template<typename Vec3T>
2007  __hostdev__ Vec3T applyInverseMap(const Vec3T& xyz) const { return mMap.applyInverseMap(xyz); } // Pos: world -> index
2008  template<typename Vec3T>
2009  __hostdev__ Vec3T applyJacobian(const Vec3T& xyz) const { return mMap.applyJacobian(xyz); } // Dir: index -> world
2010  template<typename Vec3T>
2011  __hostdev__ Vec3T applyInverseJacobian(const Vec3T& xyz) const { return mMap.applyInverseJacobian(xyz); } // Dir: world -> index
2012  template<typename Vec3T>
2013  __hostdev__ Vec3T applyIJT(const Vec3T& xyz) const { return mMap.applyIJT(xyz); }
2014  // Affine transformations based on single precision
2015  template<typename Vec3T>
2016  __hostdev__ Vec3T applyMapF(const Vec3T& xyz) const { return mMap.applyMapF(xyz); } // Pos: index -> world
2017  template<typename Vec3T>
2018  __hostdev__ Vec3T applyInverseMapF(const Vec3T& xyz) const { return mMap.applyInverseMapF(xyz); } // Pos: world -> index
2019  template<typename Vec3T>
2020  __hostdev__ Vec3T applyJacobianF(const Vec3T& xyz) const { return mMap.applyJacobianF(xyz); } // Dir: index -> world
2021  template<typename Vec3T>
2022  __hostdev__ Vec3T applyInverseJacobianF(const Vec3T& xyz) const { return mMap.applyInverseJacobianF(xyz); } // Dir: world -> index
2023  template<typename Vec3T>
2024  __hostdev__ Vec3T applyIJTF(const Vec3T& xyz) const { return mMap.applyIJTF(xyz); }
2025 
2026  // @brief Return a non-const void pointer to the tree
2027  __hostdev__ void* treePtr() { return this + 1; }// TreeData is always right after GridData
2028 
2029  // @brief Return a const void pointer to the tree
2030  __hostdev__ const void* treePtr() const { return this + 1; }// TreeData is always right after GridData
2031 
2032  /// @brief Return a const void pointer to the first node at @c LEVEL
2033  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
2034  template <uint32_t LEVEL>
2035  __hostdev__ const void* nodePtr() const
2036  {
2037  static_assert(LEVEL >= 0 && LEVEL <= 3, "invalid LEVEL template parameter");
2038  const void *treeData = this + 1;// TreeData is always right after GridData
2039  const uint64_t nodeOffset = *util::PtrAdd<uint64_t>(treeData, 8*LEVEL);// skip LEVEL uint64_t
2040  return nodeOffset ? util::PtrAdd(treeData, nodeOffset) : nullptr;
2041  }
2042 
2043  /// @brief Return a non-const void pointer to the first node at @c LEVEL
2044  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 3 means root node
2045  /// @warning If no nodes exist at @c LEVEL, NULL is returned
2046  template <uint32_t LEVEL>
2047  __hostdev__ void* nodePtr()
2048  {
2049  static_assert(LEVEL >= 0 && LEVEL <= 3, "invalid LEVEL template parameter");
2050  void *treeData = this + 1;// TreeData is always right after GridData
2051  const uint64_t nodeOffset = *util::PtrAdd<uint64_t>(treeData, 8*LEVEL);// skip LEVEL uint64_t
2052  return nodeOffset ? util::PtrAdd(treeData, nodeOffset) : nullptr;
2053  }
2054 
2055  /// @brief Return number of nodes at @c LEVEL
2056  /// @tparam LEVEL Level of the node. LEVEL 0 means leaf node and LEVEL 2 means upper internal node
2057  template <uint32_t LEVEL>
2058  __hostdev__ uint32_t nodeCount() const
2059  {
2060  static_assert(LEVEL >= 0 && LEVEL < 3, "invalid LEVEL template parameter");
2061  return *util::PtrAdd<uint32_t>(this + 1, 4*(8 + LEVEL));// TreeData is always right after GridData
2062  }
2063 
2064  /// @brief Returns a const pointer to the blindMetaData at the specified linear offset.
2065  ///
2066  /// @warning The linear offset is assumed to be in the valid range
2067  __hostdev__ const GridBlindMetaData* blindMetaData(uint32_t n) const
2068  {
2069  NANOVDB_ASSERT(n < mBlindMetadataCount);
2070  return util::PtrAdd<GridBlindMetaData>(this, mBlindMetadataOffset) + n;
2071  }
2072 
2073  __hostdev__ const char* gridName() const
2074  {
2075  if (mFlags.isMaskOn(GridFlags::HasLongGridName)) {// search for first blind meta data that contains a name
2076  NANOVDB_ASSERT(mBlindMetadataCount > 0);
2077  for (uint32_t i = 0; i < mBlindMetadataCount; ++i) {
2078  const auto* metaData = this->blindMetaData(i);// EXTREMELY important to be a pointer
2079  if (metaData->mDataClass == GridBlindDataClass::GridName) {
2080  NANOVDB_ASSERT(metaData->mDataType == GridType::Unknown);
2081  return metaData->template getBlindData<const char>();
2082  }
2083  }
2084  NANOVDB_ASSERT(false); // should never hit this!
2085  }
2086  return mGridName;
2087  }
2088 
2089  /// @brief Return memory usage in bytes for this class only.
2090  __hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
2091 
2092  /// @brief return AABB of active values in world space
2093  __hostdev__ const Vec3dBBox& worldBBox() const { return mWorldBBox; }
2094 
2095  /// @brief return AABB of active values in index space
2096  __hostdev__ const CoordBBox& indexBBox() const {return *(const CoordBBox*)(this->nodePtr<3>());}
2097 
2098  /// @brief return the size of the root table, i.e. the number of tiles and child pointers in the root node
2099  __hostdev__ uint32_t rootTableSize() const
2100  {
2101  const void *root = this->nodePtr<3>();
2102  return root ? *util::PtrAdd<uint32_t>(root, sizeof(CoordBBox)) : 0u;
2103  }
2104 
2105  /// @brief test if the grid is empty, i.e. the root table has size 0
2106  /// @return true if this grid contains no data whatsoever
2107  __hostdev__ bool isEmpty() const {return this->rootTableSize() == 0u;}
2108 
2109  /// @brief return true if RootData follows TreeData in memory without any extra padding
2110  /// @details TreeData always follows right after GridData, but the same might not be true for RootData
2111  __hostdev__ bool isRootConnected() const { return *(const uint64_t*)((const char*)(this + 1) + 24) == 64u;}
2112 }; // GridData
2113 
2114 // Forward declaration of accelerated random access class
2115 template<typename BuildT, int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1>
2116 class ReadAccessor;
2117 
2118 template<typename BuildT>
2119 using DefaultReadAccessor = ReadAccessor<BuildT, 0, 1, 2>;
2120 
2121 /// @brief Highest level of the data structure. Contains a tree and a world->index
2122 /// transform (that currently only supports uniform scaling and translation).
2123 ///
2124 /// @note The API of this class is what client code should use to interface with a NanoVDB grid
2125 template<typename TreeT>
2126 class Grid : public GridData
2127 {
2128 public:
2129  using TreeType = TreeT;
2130  using RootType = typename TreeT::RootType;
2131  using RootNodeType = typename TreeT::RootNodeType;
2132  using UpperNodeType = typename RootNodeType::ChildNodeType;
2133  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2134  using LeafNodeType = typename RootType::LeafNodeType;
2135  using DataType = GridData;
2136  using ValueType = typename TreeT::ValueType;
2137  using BuildType = typename TreeT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2138  using CoordType = typename TreeT::CoordType;
2139  using AccessorType = DefaultReadAccessor<BuildType>;
2140 
2141  /// @brief Disallow constructions, copy and assignment
2142  ///
2143  /// @note Only a Serializer, defined elsewhere, can instantiate this class
2144  Grid(const Grid&) = delete;
2145  Grid& operator=(const Grid&) = delete;
2146  ~Grid() = delete;
2147 
2148  __hostdev__ Version version() const { return DataType::mVersion; }
2149 
2150  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2151 
2152  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2153 
2154  /// @brief Return memory usage in bytes for this class only.
2155  //__hostdev__ static uint64_t memUsage() { return sizeof(GridData); }
2156 
2157  /// @brief Return the memory footprint of the entire grid, i.e. including all nodes and blind data
2158  __hostdev__ uint64_t gridSize() const { return DataType::mGridSize; }
2159 
2160  /// @brief Return index of this grid in the buffer
2161  __hostdev__ uint32_t gridIndex() const { return DataType::mGridIndex; }
2162 
2163  /// @brief Return total number of grids in the buffer
2164  __hostdev__ uint32_t gridCount() const { return DataType::mGridCount; }
2165 
2166  /// @brief Return the total number of values indexed by this IndexGrid
2167  ///
2168  /// @note This method is only defined for IndexGrid = NanoGrid<ValueIndex || ValueOnIndex >
2169  template<typename T = BuildType>
2170  __hostdev__ typename util::enable_if<BuildTraits<T>::is_index, const uint64_t&>::type
2171  valueCount() const { return DataType::mData1; }
2172 
2173  /// @brief Return the total number of points indexed by this PointGrid
2174  ///
2175  /// @note This method is only defined for PointGrid = NanoGrid<Point>
2176  template<typename T = BuildType>
2177  __hostdev__ typename util::enable_if<util::is_same<T, Point>::value, const uint64_t&>::type
2178  pointCount() const { return DataType::mData1; }
2179 
2180  /// @brief Return a const reference to the tree
2181  __hostdev__ const TreeT& tree() const { return *reinterpret_cast<const TreeT*>(this->treePtr()); }
2182 
2183  /// @brief Return a non-const reference to the tree
2184  __hostdev__ TreeT& tree() { return *reinterpret_cast<TreeT*>(this->treePtr()); }
2185 
2186  /// @brief Return a new instance of a ReadAccessor used to access values in this grid
2187  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->tree().root()); }
2188 
2189  /// @brief Return a const reference to the size of a voxel in world units
2190  __hostdev__ const Vec3d& voxelSize() const { return DataType::mVoxelSize; }
2191 
2192  /// @brief Return a const reference to the Map for this grid
2193  __hostdev__ const Map& map() const { return DataType::mMap; }
2194 
2195  /// @brief world to index space transformation
2196  template<typename Vec3T>
2197  __hostdev__ Vec3T worldToIndex(const Vec3T& xyz) const { return this->applyInverseMap(xyz); }
2198 
2199  /// @brief index to world space transformation
2200  template<typename Vec3T>
2201  __hostdev__ Vec3T indexToWorld(const Vec3T& xyz) const { return this->applyMap(xyz); }
2202 
2203  /// @brief transformation from index space direction to world space direction
2204  /// @warning assumes dir to be normalized
2205  template<typename Vec3T>
2206  __hostdev__ Vec3T indexToWorldDir(const Vec3T& dir) const { return this->applyJacobian(dir); }
2207 
2208  /// @brief transformation from world space direction to index space direction
2209  /// @warning assumes dir to be normalized
2210  template<typename Vec3T>
2211  __hostdev__ Vec3T worldToIndexDir(const Vec3T& dir) const { return this->applyInverseJacobian(dir); }
2212 
2213  /// @brief transform the gradient from index space to world space.
2214  /// @details Applies the inverse jacobian transform map.
2215  template<typename Vec3T>
2216  __hostdev__ Vec3T indexToWorldGrad(const Vec3T& grad) const { return this->applyIJT(grad); }
2217 
2218  /// @brief world to index space transformation
2219  template<typename Vec3T>
2220  __hostdev__ Vec3T worldToIndexF(const Vec3T& xyz) const { return this->applyInverseMapF(xyz); }
2221 
2222  /// @brief index to world space transformation
2223  template<typename Vec3T>
2224  __hostdev__ Vec3T indexToWorldF(const Vec3T& xyz) const { return this->applyMapF(xyz); }
2225 
2226  /// @brief transformation from index space direction to world space direction
2227  /// @warning assumes dir to be normalized
2228  template<typename Vec3T>
2229  __hostdev__ Vec3T indexToWorldDirF(const Vec3T& dir) const { return this->applyJacobianF(dir); }
2230 
2231  /// @brief transformation from world space direction to index space direction
2232  /// @warning assumes dir to be normalized
2233  template<typename Vec3T>
2234  __hostdev__ Vec3T worldToIndexDirF(const Vec3T& dir) const { return this->applyInverseJacobianF(dir); }
2235 
2236  /// @brief Transforms the gradient from index space to world space.
2237  /// @details Applies the inverse jacobian transform map.
2238  template<typename Vec3T>
2239  __hostdev__ Vec3T indexToWorldGradF(const Vec3T& grad) const { return DataType::applyIJTF(grad); }
2240 
2241  /// @brief Computes an AABB of active values in world space
2242  //__hostdev__ const Vec3dBBox& worldBBox() const { return DataType::mWorldBBox; }
2243 
2244  /// @brief Computes an AABB of active values in index space
2245  ///
2246  /// @note This method is returning a floating point bounding box and not a CoordBBox. This makes
2247  /// it more useful for clipping rays.
2248  //__hostdev__ const BBox<CoordType>& indexBBox() const { return this->tree().bbox(); }
2249 
2250  /// @brief Return the total number of active voxels in this tree.
2251  __hostdev__ uint64_t activeVoxelCount() const { return this->tree().activeVoxelCount(); }
2252 
2253  /// @brief Methods related to the classification of this grid
2254  __hostdev__ bool isValid() const { return DataType::isValid(); }
2255  __hostdev__ const GridType& gridType() const { return DataType::mGridType; }
2256  __hostdev__ const GridClass& gridClass() const { return DataType::mGridClass; }
2257  __hostdev__ bool isLevelSet() const { return DataType::mGridClass == GridClass::LevelSet; }
2258  __hostdev__ bool isFogVolume() const { return DataType::mGridClass == GridClass::FogVolume; }
2259  __hostdev__ bool isStaggered() const { return DataType::mGridClass == GridClass::Staggered; }
2260  __hostdev__ bool isPointIndex() const { return DataType::mGridClass == GridClass::PointIndex; }
2261  __hostdev__ bool isGridIndex() const { return DataType::mGridClass == GridClass::IndexGrid; }
2262  __hostdev__ bool isPointData() const { return DataType::mGridClass == GridClass::PointData; }
2263  __hostdev__ bool isMask() const { return DataType::mGridClass == GridClass::Topology; }
2264  __hostdev__ bool isUnknown() const { return DataType::mGridClass == GridClass::Unknown; }
2265  __hostdev__ bool hasMinMax() const { return DataType::mFlags.isMaskOn(GridFlags::HasMinMax); }
2266  __hostdev__ bool hasBBox() const { return DataType::mFlags.isMaskOn(GridFlags::HasBBox); }
2267  __hostdev__ bool hasLongGridName() const { return DataType::mFlags.isMaskOn(GridFlags::HasLongGridName); }
2268  __hostdev__ bool hasAverage() const { return DataType::mFlags.isMaskOn(GridFlags::HasAverage); }
2269  __hostdev__ bool hasStdDeviation() const { return DataType::mFlags.isMaskOn(GridFlags::HasStdDeviation); }
2270  __hostdev__ bool isBreadthFirst() const { return DataType::mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
2271 
2272  /// @brief return true if the specified node type is laid out breadth-first in memory and has a fixed size.
2273  /// This allows for sequential access to the nodes.
2274  template<typename NodeT>
2275  __hostdev__ bool isSequential() const { return NodeT::FIXED_SIZE && this->isBreadthFirst(); }
2276 
2277  /// @brief return true if the specified node level is laid out breadth-first in memory and has a fixed size.
2278  /// This allows for sequential access to the nodes.
2279  template<int LEVEL>
2280  __hostdev__ bool isSequential() const { return NodeTrait<TreeT, LEVEL>::type::FIXED_SIZE && this->isBreadthFirst(); }
2281 
2282  /// @brief return true if nodes at all levels can safely be accessed with simple linear offsets
2283  __hostdev__ bool isSequential() const { return UpperNodeType::FIXED_SIZE && LowerNodeType::FIXED_SIZE && LeafNodeType::FIXED_SIZE && this->isBreadthFirst(); }
2284 
2285  /// @brief Return a c-string with the name of this grid
2286  __hostdev__ const char* gridName() const { return DataType::gridName(); }
2287 
2288  /// @brief Return a c-string with the name of this grid, truncated to 255 characters
2289  __hostdev__ const char* shortGridName() const { return DataType::mGridName; }
2290 
2291  /// @brief Return checksum of the grid buffer.
2292  __hostdev__ const Checksum& checksum() const { return DataType::mChecksum; }
2293 
2294  /// @brief Return true if this grid is empty, i.e. contains no values or nodes.
2295  //__hostdev__ bool isEmpty() const { return this->tree().isEmpty(); }
2296 
2297  /// @brief Return the count of blind-data encoded in this grid
2298  __hostdev__ uint32_t blindDataCount() const { return DataType::mBlindMetadataCount; }
2299 
2300  /// @brief Return the index of the first blind data with specified name if found, otherwise -1.
2301  __hostdev__ int findBlindData(const char* name) const;
2302 
2303  /// @brief Return the index of the first blind data with specified semantic if found, otherwise -1.
2304  __hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const;
2305 
2306  /// @brief Returns a const pointer to the blindData at the specified linear offset.
2307  ///
2308  /// @warning Pointer might be NULL and the linear offset is assumed to be in the valid range
2309  // this method is deprecated !!!!
2310  [[deprecated("Use Grid::getBlindData<T>() instead.")]]
2311  __hostdev__ const void* blindData(uint32_t n) const
2312  {
2313  printf("\nnanovdb::Grid::blindData is unsafe and hence deprecated! Please use nanovdb::Grid::getBlindData instead.\n\n");
2314  NANOVDB_ASSERT(n < DataType::mBlindMetadataCount);
2315  return this->blindMetaData(n).blindData();
2316  }
2317 
2318  template <typename BlindDataT>
2319  __hostdev__ const BlindDataT* getBlindData(uint32_t n) const
2320  {
2321  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
2322  return this->blindMetaData(n).template getBlindData<BlindDataT>();// NULL if mismatching BlindDataT
2323  }
2324 
2325  template <typename BlindDataT>
2326  __hostdev__ BlindDataT* getBlindData(uint32_t n)
2327  {
2328  if (n >= DataType::mBlindMetadataCount) return nullptr;// index is out of bounds
2329  return const_cast<BlindDataT*>(this->blindMetaData(n).template getBlindData<BlindDataT>());// NULL if mismatching BlindDataT
2330  }
2331 
2332  __hostdev__ const GridBlindMetaData& blindMetaData(uint32_t n) const { return *DataType::blindMetaData(n); }
2333 
2334 private:
2335  static_assert(sizeof(GridData) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(GridData) is misaligned");
2336 }; // Class Grid
2337 
2338 template<typename TreeT>
2339 __hostdev__ int Grid<TreeT>::findBlindDataForSemantic(GridBlindDataSemantic semantic) const
2340 {
2341  for (uint32_t i = 0, n = this->blindDataCount(); i < n; ++i) {
2342  if (this->blindMetaData(i).mSemantic == semantic)
2343  return int(i);
2344  }
2345  return -1;
2346 }
2347 
2348 template<typename TreeT>
2349 __hostdev__ int Grid<TreeT>::findBlindData(const char* name) const
2350 {
2351  auto test = [&](int n) {
2352  const char* str = this->blindMetaData(n).mName;
2353  for (int i = 0; i < GridBlindMetaData::MaxNameSize; ++i) {
2354  if (name[i] != str[i])
2355  return false;
2356  if (name[i] == '\0' && str[i] == '\0')
2357  return true;
2358  }
2359  return true; // all MaxNameSize characters matched
2360  };
2361  for (int i = 0, n = this->blindDataCount(); i < n; ++i)
2362  if (test(i))
2363  return i;
2364  return -1;
2365 }
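/// @par Example
/// An illustrative sketch of the typical read-only access pattern through the Grid API;
/// the function name is hypothetical and the FloatGrid is assumed to have been loaded
/// elsewhere (e.g. via a GridHandle):
/// @code
/// float sampleNearest(const nanovdb::FloatGrid& grid, const nanovdb::Vec3d& xyzWorld)
/// {
///     auto acc = grid.getAccessor(); // light-weight accessor; use one per thread
///     nanovdb::Vec3d ijk = grid.worldToIndex(xyzWorld); // world -> index space
///     return acc.getValue(nanovdb::Coord::Floor(ijk));  // nearest-neighbor voxel lookup
/// }
/// @endcode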
2366 
2367 // ----------------------------> Tree <--------------------------------------
2368 
2369 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) TreeData
2370 { // sizeof(TreeData) == 64B
2371  int64_t mNodeOffset[4];// 32B, byte offset from this tree to first leaf, lower, upper and root node. If mNodeCount[N]=0 => mNodeOffset[N]==mNodeOffset[N+1]
2372  uint32_t mNodeCount[3]; // 12B, total number of nodes of type: leaf, lower internal, upper internal
2373  uint32_t mTileCount[3]; // 12B, total number of active tile values at the lower internal, upper internal and root node levels
2374  uint64_t mVoxelCount; // 8B, total number of active voxels in the root and all its child nodes.
2375  // No padding since it's always 32B aligned
2376  TreeData& operator=(const TreeData&) = default;
2377  __hostdev__ void setRoot(const void* root) {
2378  NANOVDB_ASSERT(root);
2379  mNodeOffset[3] = util::PtrDiff(root, this);
2380  }
2381 
2382  /// @brief Get a non-const void pointer to the root node (never NULL)
2383  __hostdev__ void* getRoot() { return util::PtrAdd(this, mNodeOffset[3]); }
2384 
2385  /// @brief Get a const void pointer to the root node (never NULL)
2386  __hostdev__ const void* getRoot() const { return util::PtrAdd(this, mNodeOffset[3]); }
2387 
2388  template<typename NodeT>
2389  __hostdev__ void setFirstNode(const NodeT* node) {mNodeOffset[NodeT::LEVEL] = (node ? util::PtrDiff(node, this) : 0);}
2390 
2391  /// @brief Return true if the root is empty, i.e. has no child nodes or constant tiles
2392  __hostdev__ bool isEmpty() const {return mNodeOffset[3] ? *util::PtrAdd<uint32_t>(this, mNodeOffset[3] + sizeof(CoordBBox)) == 0 : true;}
2393 
2394  /// @brief Return the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2395  __hostdev__ CoordBBox bbox() const {return mNodeOffset[3] ? *util::PtrAdd<CoordBBox>(this, mNodeOffset[3]) : CoordBBox();}
2396 
2397  /// @brief return true if RootData is laid out immediately after TreeData in memory
2398  __hostdev__ bool isRootNext() const {return mNodeOffset[3] ? mNodeOffset[3] == sizeof(TreeData) : false; }
2399 };// TreeData
2400 
2401 // ----------------------------> GridTree <--------------------------------------
2402 
2403 /// @brief defines a tree type from a grid type while preserving constness
2404 template<typename GridT>
2405 struct GridTree
2406 {
2407  using Type = typename GridT::TreeType;
2408  using type = typename GridT::TreeType;
2409 };
2410 template<typename GridT>
2411 struct GridTree<const GridT>
2412 {
2413  using Type = const typename GridT::TreeType;
2414  using type = const typename GridT::TreeType;
2415 };
2416 
2417 template<typename GridT>
2419 
2420 // ----------------------------> Tree <--------------------------------------
2421 
2422 /// @brief VDB Tree, which is a thin wrapper around a RootNode.
2423 template<typename RootT>
2424 class Tree : public TreeData
2425 {
2426  static_assert(RootT::LEVEL == 3, "Tree depth is not supported");
2427  static_assert(RootT::ChildNodeType::LOG2DIM == 5, "Tree configuration is not supported");
2428  static_assert(RootT::ChildNodeType::ChildNodeType::LOG2DIM == 4, "Tree configuration is not supported");
2429  static_assert(RootT::LeafNodeType::LOG2DIM == 3, "Tree configuration is not supported");
2430 
2431 public:
2432  using DataType = TreeData;
2433  using RootType = RootT;
2434  using RootNodeType = RootT;
2435  using UpperNodeType = typename RootNodeType::ChildNodeType;
2436  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2437  using LeafNodeType = typename RootType::LeafNodeType;
2438  using ValueType = typename RootT::ValueType;
2439  using BuildType = typename RootT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2440  using CoordType = typename RootT::CoordType;
2441  using AccessorType = DefaultReadAccessor<BuildType>;
2442 
2443  using Node3 = RootT;
2444  using Node2 = typename RootT::ChildNodeType;
2445  using Node1 = typename Node2::ChildNodeType;
2446  using Node0 = LeafNodeType;
2447 
2448  /// @brief This class cannot be constructed or deleted
2449  Tree() = delete;
2450  Tree(const Tree&) = delete;
2451  Tree& operator=(const Tree&) = delete;
2452  ~Tree() = delete;
2453 
2454  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
2455 
2456  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
2457 
2458  /// @brief return memory usage in bytes for the class
2459  __hostdev__ static uint64_t memUsage() { return sizeof(DataType); }
2460 
2461  __hostdev__ RootT& root() {return *reinterpret_cast<RootT*>(DataType::getRoot());}
2462 
2463  __hostdev__ const RootT& root() const {return *reinterpret_cast<const RootT*>(DataType::getRoot());}
2464 
2465  __hostdev__ AccessorType getAccessor() const { return AccessorType(this->root()); }
2466 
2467  /// @brief Return the value of the given voxel (regardless of state or location in the tree.)
2468  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->root().getValue(ijk); }
2469  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->root().getValue(CoordType(i, j, k)); }
2470 
2471  /// @brief Return the active state of the given voxel (regardless of state or location in the tree.)
2472  __hostdev__ bool isActive(const CoordType& ijk) const { return this->root().isActive(ijk); }
2473 
2474  /// @brief Return true if this tree is empty, i.e. contains no values or nodes
2475  //__hostdev__ bool isEmpty() const { return this->root().isEmpty(); }
2476 
2477  /// @brief Combines the previous two methods in a single call
2478  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->root().probeValue(ijk, v); }
2479 
2480  /// @brief Return a const reference to the background value.
2481  __hostdev__ const ValueType& background() const { return this->root().background(); }
2482 
2483  /// @brief Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree
2484  __hostdev__ void extrema(ValueType& min, ValueType& max) const;
2485 
2486  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
2487  //__hostdev__ const BBox<CoordType>& bbox() const { return this->root().bbox(); }
2488 
2489  /// @brief Return the total number of active voxels in this tree.
2490  __hostdev__ uint64_t activeVoxelCount() const { return DataType::mVoxelCount; }
2491 
2492  /// @brief Return the total number of active tiles at the specified level of the tree.
2493  ///
2494  /// @details level = 1,2,3 corresponds to active tile count in lower internal nodes, upper
2495  /// internal nodes, and the root level. Note active values at the leaf level are
2496  /// referred to as active voxels (see activeVoxelCount defined above).
2497  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const
2498  {
2499  NANOVDB_ASSERT(level > 0 && level <= 3); // 1, 2, or 3
2500  return DataType::mTileCount[level - 1];
2501  }
2502 
2503  template<typename NodeT>
2504  __hostdev__ uint32_t nodeCount() const
2505  {
2506  static_assert(NodeT::LEVEL < 3, "Invalid NodeT");
2507  return DataType::mNodeCount[NodeT::LEVEL];
2508  }
2509 
2510  __hostdev__ uint32_t nodeCount(int level) const
2511  {
2512  NANOVDB_ASSERT(level < 3);
2513  return DataType::mNodeCount[level];
2514  }
2515 
2516  __hostdev__ uint32_t totalNodeCount() const
2517  {
2518  return DataType::mNodeCount[0] + DataType::mNodeCount[1] + DataType::mNodeCount[2];
2519  }
2520 
2521  /// @brief return a pointer to the first node of the specified type
2522  ///
2523  /// @warning Note it may return NULL if no nodes exist
2524  template<typename NodeT>
2525  __hostdev__ NodeT* getFirstNode()
2526  {
2527  const int64_t nodeOffset = DataType::mNodeOffset[NodeT::LEVEL];
2528  return nodeOffset ? util::PtrAdd<NodeT>(this, nodeOffset) : nullptr;
2529  }
2530 
2531  /// @brief return a const pointer to the first node of the specified type
2532  ///
2533  /// @warning Note it may return NULL if no nodes exist
2534  template<typename NodeT>
2535  __hostdev__ const NodeT* getFirstNode() const
2536  {
2537  const int64_t nodeOffset = DataType::mNodeOffset[NodeT::LEVEL];
2538  return nodeOffset ? util::PtrAdd<NodeT>(this, nodeOffset) : nullptr;
2539  }
2540 
2541  /// @brief return a pointer to the first node at the specified level
2542  ///
2543  /// @warning Note it may return NULL if no nodes exist
2544  template<int LEVEL>
2545  __hostdev__ typename NodeTrait<RootT, LEVEL>::type* getFirstNode()
2546  {
2547  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
2548  }
2549 
2550  /// @brief return a const pointer to the first node of the specified level
2551  ///
2552  /// @warning Note it may return NULL if no nodes exist
2553  template<int LEVEL>
2554  __hostdev__ const typename NodeTrait<RootT, LEVEL>::type* getFirstNode() const
2555  {
2556  return this->template getFirstNode<typename NodeTrait<RootT, LEVEL>::type>();
2557  }
2558 
2559  /// @brief Template specializations of getFirstNode
2560  __hostdev__ LeafNodeType* getFirstLeaf() { return this->getFirstNode<LeafNodeType>(); }
2561  __hostdev__ const LeafNodeType* getFirstLeaf() const { return this->getFirstNode<LeafNodeType>(); }
2562  __hostdev__ typename NodeTrait<RootT, 1>::type* getFirstLower() { return this->getFirstNode<1>(); }
2563  __hostdev__ const typename NodeTrait<RootT, 1>::type* getFirstLower() const { return this->getFirstNode<1>(); }
2564  __hostdev__ typename NodeTrait<RootT, 2>::type* getFirstUpper() { return this->getFirstNode<2>(); }
2565  __hostdev__ const typename NodeTrait<RootT, 2>::type* getFirstUpper() const { return this->getFirstNode<2>(); }
2566 
2567  template<typename OpT, typename... ArgsT>
2568  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
2569  {
2570  return this->root().template get<OpT>(ijk, args...);
2571  }
2572 
2573  template<typename OpT, typename... ArgsT>
2574  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
2575  {
2576  return this->root().template set<OpT>(ijk, args...);
2577  }
2578 
2579 private:
2580  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(TreeData) is misaligned");
2581 
2582 }; // Tree class
2583 
2584 template<typename RootT>
2585 __hostdev__ void Tree<RootT>::extrema(ValueType& min, ValueType& max) const
2586 {
2587  min = this->root().minimum();
2588  max = this->root().maximum();
2589 }
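/// @par Example
/// An illustrative sketch of querying a tree directly; intended for debugging since
/// ReadAccessor::getValue is the preferred access method. The function name is
/// hypothetical and <cstdio> is assumed for printf:
/// @code
/// void printTreeStats(const nanovdb::FloatTree& tree)
/// {
///     float mn = 0.0f, mx = 0.0f;
///     tree.extrema(mn, mx); // min/max of all active values
///     printf("leaf nodes: %u active voxels: %llu min: %f max: %f\n",
///            tree.nodeCount(0), (unsigned long long)tree.activeVoxelCount(), mn, mx);
/// }
/// @endcode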
2590 
2591 // --------------------------> RootData <------------------------------------
2592 
2593 /// @brief Struct with all the member data of the RootNode (useful during serialization of an openvdb RootNode)
2594 ///
2595 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
2596 template<typename ChildT>
2597 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) RootData
2598 {
2599  using ValueT = typename ChildT::ValueType;
2600  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2601  using CoordT = typename ChildT::CoordType;
2602  using StatsT = typename ChildT::FloatType;
2603  static constexpr bool FIXED_SIZE = false;
2604 
2605  /// @brief Return a key based on the coordinates of a voxel
2606 #ifdef NANOVDB_USE_SINGLE_ROOT_KEY
2607  using KeyT = uint64_t;
2608  template<typename CoordType>
2609  __hostdev__ static KeyT CoordToKey(const CoordType& ijk)
2610  {
2611  static_assert(sizeof(CoordT) == sizeof(CoordType), "Mismatching sizeof");
2612  static_assert(32 - ChildT::TOTAL <= 21, "Cannot use 64 bit root keys");
2613  return (KeyT(uint32_t(ijk[2]) >> ChildT::TOTAL)) | // z is the lower 21 bits
2614  (KeyT(uint32_t(ijk[1]) >> ChildT::TOTAL) << 21) | // y is the middle 21 bits
2615  (KeyT(uint32_t(ijk[0]) >> ChildT::TOTAL) << 42); // x is the upper 21 bits
2616  }
2617  __hostdev__ static CoordT KeyToCoord(const KeyT& key)
2618  {
2619  static constexpr uint64_t MASK = (1u << 21) - 1; // used to mask out 21 lower bits
2620  return CoordT(((key >> 42) & MASK) << ChildT::TOTAL, // x are the upper 21 bits
2621  ((key >> 21) & MASK) << ChildT::TOTAL, // y are the middle 21 bits
2622  ( key & MASK) << ChildT::TOTAL); // z are the lower 21 bits
2623  }
2624 #else
2625  using KeyT = CoordT;
2626  __hostdev__ static KeyT CoordToKey(const CoordT& ijk) { return ijk & ~ChildT::MASK; }
2627  __hostdev__ static CoordT KeyToCoord(const KeyT& key) { return key; }
2628 #endif
2629  math::BBox<CoordT> mBBox; // 24B. AABB of active values in index space.
2630  uint32_t mTableSize; // 4B. number of tiles and child pointers in the root node
2631 
2632  ValueT mBackground; // background value, i.e. value of any unset voxel
2633  ValueT mMinimum; // typically 4B, minimum of all the active values
2634  ValueT mMaximum; // typically 4B, maximum of all the active values
2635  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
2636  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
2637 
2638  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
2639  ///
2640  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
2641  __hostdev__ static constexpr uint32_t padding()
2642  {
2643  return sizeof(RootData) - (24 + 4 + 3 * sizeof(ValueT) + 2 * sizeof(StatsT));
2644  }
2645 
2646  struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) Tile
2647  {
2648  template<typename CoordType>
2649  __hostdev__ void setChild(const CoordType& k, const void* ptr, const RootData* data)
2650  {
2651  key = CoordToKey(k);
2652  state = false;
2653  child = util::PtrDiff(ptr, data);
2654  }
2655  template<typename CoordType, typename ValueType>
2656  __hostdev__ void setValue(const CoordType& k, bool s, const ValueType& v)
2657  {
2658  key = CoordToKey(k);
2659  state = s;
2660  value = v;
2661  child = 0;
2662  }
2663  __hostdev__ bool isChild() const { return child != 0; }
2664  __hostdev__ bool isValue() const { return child == 0; }
2665  __hostdev__ bool isActive() const { return child == 0 && state; }
2666  __hostdev__ CoordT origin() const { return KeyToCoord(key); }
2667  KeyT key; // NANOVDB_USE_SINGLE_ROOT_KEY ? 8B : 12B
2668  int64_t child; // 8B. signed byte offset from this node to the child node. 0 means it is a constant tile, so use value.
2669  uint32_t state; // 4B. state of tile value
2670  ValueT value; // value of tile (i.e. no child node)
2671  }; // Tile
2672 
2673  /// @brief Returns a pointer to the tile at the specified linear offset.
2674  ///
2675  /// @warning The linear offset is assumed to be in the valid range
2676  __hostdev__ const Tile* tile(uint32_t n) const
2677  {
2678  NANOVDB_ASSERT(n < mTableSize);
2679  return reinterpret_cast<const Tile*>(this + 1) + n;
2680  }
2681  __hostdev__ Tile* tile(uint32_t n)
2682  {
2683  NANOVDB_ASSERT(n < mTableSize);
2684  return reinterpret_cast<Tile*>(this + 1) + n;
2685  }
2686 
2687  template<typename DataT>
2688  class TileIter
2689  {
2690  protected:
2691  using TileT = typename util::match_const<Tile, DataT>::type;
2692  using NodeT = typename util::match_const<ChildT, DataT>::type;
2693  TileT *mBegin, *mPos, *mEnd;
2694 
2695  public:
2696  __hostdev__ TileIter() : mBegin(nullptr), mPos(nullptr), mEnd(nullptr) {}
2697  __hostdev__ TileIter(DataT* data, uint32_t pos = 0)
2698  : mBegin(reinterpret_cast<TileT*>(data + 1))// tiles reside right after the RootData
2699  , mPos(mBegin + pos)
2700  , mEnd(mBegin + data->mTableSize)
2701  {
2702  NANOVDB_ASSERT(data);
2703  NANOVDB_ASSERT(mBegin <= mPos);// pos > mTableSize is allowed
2704  NANOVDB_ASSERT(mBegin <= mEnd);// mTableSize = 0 is possible
2705  }
2706  __hostdev__ inline operator bool() const { return mPos < mEnd; }
2707  __hostdev__ inline auto pos() const {return mPos - mBegin; }
2708  __hostdev__ inline TileIter& operator++()
2709  {
2710  ++mPos;
2711  return *this;
2712  }
2713  __hostdev__ inline TileT& operator*() const
2714  {
2715  NANOVDB_ASSERT(mPos < mEnd);
2716  return *mPos;
2717  }
2718  __hostdev__ inline TileT* operator->() const
2719  {
2720  NANOVDB_ASSERT(mPos < mEnd);
2721  return mPos;
2722  }
2723  __hostdev__ inline DataT* data() const
2724  {
2725  NANOVDB_ASSERT(mBegin);
2726  return reinterpret_cast<DataT*>(mBegin) - 1;
2727  }
2728  __hostdev__ inline bool isChild() const
2729  {
2730  NANOVDB_ASSERT(mPos < mEnd);
2731  return mPos->child != 0;
2732  }
2733  __hostdev__ inline bool isValue() const
2734  {
2735  NANOVDB_ASSERT(mPos < mEnd);
2736  return mPos->child == 0;
2737  }
2738  __hostdev__ inline bool isValueOn() const
2739  {
2740  NANOVDB_ASSERT(mPos < mEnd);
2741  return mPos->child == 0 && mPos->state != 0;
2742  }
2743  __hostdev__ inline NodeT* child() const
2744  {
2745  NANOVDB_ASSERT(mPos < mEnd && mPos->child != 0);
2746  return util::PtrAdd<NodeT>(this->data(), mPos->child);// byte offset relative to RootData::this
2747  }
2748  __hostdev__ inline ValueT value() const
2749  {
2750  NANOVDB_ASSERT(mPos < mEnd && mPos->child == 0);
2751  return mPos->value;
2752  }
2753  };// TileIter
2754 
2755  using TileIterator = TileIter<RootData>;
2756  using ConstTileIterator = TileIter<const RootData>;
2757 
2760 
2761  __hostdev__ inline TileIterator probe(const CoordT& ijk)
2762  {
2763  const auto key = CoordToKey(ijk);
2764  TileIterator iter(this);
2765  for(; iter; ++iter) if (iter->key == key) break;
2766  return iter;
2767  }
2768 
2769  __hostdev__ inline ConstTileIterator probe(const CoordT& ijk) const
2770  {
2771  const auto key = CoordToKey(ijk);
2772  ConstTileIterator iter(this);
2773  for(; iter; ++iter) if (iter->key == key) break;
2774  return iter;
2775  }
2776 
2777  __hostdev__ inline Tile* probeTile(const CoordT& ijk)
2778  {
2779  auto iter = this->probe(ijk);
2780  return iter ? iter.operator->() : nullptr;
2781  }
2782 
2783  __hostdev__ inline const Tile* probeTile(const CoordT& ijk) const
2784  {
2785  return const_cast<RootData*>(this)->probeTile(ijk);
2786  }
2787 
2788  __hostdev__ inline ChildT* probeChild(const CoordT& ijk)
2789  {
2790  auto iter = this->probe(ijk);
2791  return iter && iter.isChild() ? iter.child() : nullptr;
2792  }
2793 
2794  __hostdev__ inline const ChildT* probeChild(const CoordT& ijk) const
2795  {
2796  return const_cast<RootData*>(this)->probeChild(ijk);
2797  }
2798 
2799  /// @brief Returns a const reference to the child node in the specified tile.
2800  ///
2801  /// @warning A child node is assumed to exist in the specified tile
2802  __hostdev__ ChildT* getChild(const Tile* tile)
2803  {
2804  NANOVDB_ASSERT(tile->child);
2805  return util::PtrAdd<ChildT>(this, tile->child);
2806  }
2807  __hostdev__ const ChildT* getChild(const Tile* tile) const
2808  {
2809  NANOVDB_ASSERT(tile->child);
2810  return util::PtrAdd<ChildT>(this, tile->child);
2811  }
2812 
2813  __hostdev__ const ValueT& getMin() const { return mMinimum; }
2814  __hostdev__ const ValueT& getMax() const { return mMaximum; }
2815  __hostdev__ const StatsT& average() const { return mAverage; }
2816  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
2817 
2818  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
2819  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
2820  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
2821  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
2822 
2823  /// @brief This class cannot be constructed or deleted
2824  RootData() = delete;
2825  RootData(const RootData&) = delete;
2826  RootData& operator=(const RootData&) = delete;
2827  ~RootData() = delete;
2828 }; // RootData
2829 
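 A standalone worked example (not part of NanoVDB.h) of the 21-bit-per-axis key packing performed by RootData::CoordToKey/KeyToCoord above when NANOVDB_USE_SINGLE_ROOT_KEY is defined, assuming the default configuration in which the root's children span 2^12 voxels per axis, i.e. ChildT::TOTAL == 12:

     #include <cassert>
     #include <cstdint>

     constexpr uint32_t TOTAL  = 12;               // log2 extent of a root child (upper internal node)
     constexpr uint64_t MASK21 = (1u << 21) - 1;   // 21 bits per axis

     // Pack the origin of the child node containing (i,j,k) into one 64-bit key: z low, y middle, x high.
     uint64_t coordToKey(int i, int j, int k)
     {
         return (uint64_t(uint32_t(k) >> TOTAL))       |
                (uint64_t(uint32_t(j) >> TOTAL) << 21) |
                (uint64_t(uint32_t(i) >> TOTAL) << 42);
     }

     // Recover the child-node origin; the lower TOTAL bits of each axis are dropped by design.
     void keyToCoord(uint64_t key, int& i, int& j, int& k)
     {
         i = int(uint32_t(((key >> 42) & MASK21) << TOTAL)); // wraps like Coord's int32_t storage
         j = int(uint32_t(((key >> 21) & MASK21) << TOTAL));
         k = int(uint32_t(( key        & MASK21) << TOTAL));
     }

     int main()
     {
         int i, j, k;
         keyToCoord(coordToKey(-1024, 0, 5000), i, j, k);
         assert(i == -4096 && j == 0 && k == 4096); // origins are multiples of 2^TOTAL = 4096
         return 0;
     }
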
2830 // --------------------------> RootNode <------------------------------------
2831 
2832 /// @brief Top-most node of the VDB tree structure.
2833 template<typename ChildT>
2834 class RootNode : public RootData<ChildT>
2835 {
2836 public:
2837  using DataType = RootData<ChildT>;
2838  using ChildNodeType = ChildT;
2839  using RootType = RootNode<ChildT>; // this allows RootNode to behave like a Tree
2841  using UpperNodeType = ChildT;
2842  using LowerNodeType = typename UpperNodeType::ChildNodeType;
2843  using LeafNodeType = typename ChildT::LeafNodeType;
2844  using ValueType = typename DataType::ValueT;
2845  using FloatType = typename DataType::StatsT;
2846  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
2847 
2848  using CoordType = typename ChildT::CoordType;
2849  using BBoxType = math::BBox<CoordType>;
2851  using Tile = typename DataType::Tile;
2852  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
2853 
2854  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
2855 
2856  template<typename RootT>
2857  class BaseIter
2858  {
2859  protected:
2860  using DataT = typename util::match_const<DataType, RootT>::type;
2861  using TileT = typename util::match_const<Tile, RootT>::type;
2862  typename DataType::template TileIter<DataT> mTileIter;
2863  __hostdev__ BaseIter() : mTileIter() {}
2864  __hostdev__ BaseIter(DataT* data) : mTileIter(data){}
2865 
2866  public:
2867  __hostdev__ operator bool() const { return bool(mTileIter); }
2868  __hostdev__ uint32_t pos() const { return uint32_t(mTileIter.pos()); }
2869  __hostdev__ TileT* tile() const { return mTileIter.operator->(); }
2870  __hostdev__ CoordType getOrigin() const {return mTileIter->origin();}
2871  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
2872  }; // Member class BaseIter
2873 
2874  template<typename RootT>
2875  class ChildIter : public BaseIter<RootT>
2876  {
2877  static_assert(util::is_same<typename util::remove_const<RootT>::type, RootNode>::value, "Invalid RootT");
2878  using BaseT = BaseIter<RootT>;
2879  using NodeT = typename util::match_const<ChildT, RootT>::type;
2880  using BaseT::mTileIter;
2881 
2882  public:
2883  __hostdev__ ChildIter() : BaseT() {}
2884  __hostdev__ ChildIter(RootT* parent) : BaseT(parent->data())
2885  {
2886  while (mTileIter && mTileIter.isValue()) ++mTileIter;
2887  }
2888  __hostdev__ NodeT& operator*() const {return *mTileIter.child();}
2889  __hostdev__ NodeT* operator->() const {return mTileIter.child();}
2890  __hostdev__ ChildIter& operator++()
2891  {
2892  ++mTileIter;
2893  while (mTileIter && mTileIter.isValue()) ++mTileIter;
2894  return *this;
2895  }
2896  __hostdev__ ChildIter operator++(int)
2897  {
2898  auto tmp = *this;
2899  this->operator++();
2900  return tmp;
2901  }
2902  }; // Member class ChildIter
2903 
2906 
2909 
2910  template<typename RootT>
2911  class ValueIter : public BaseIter<RootT>
2912  {
2913  using BaseT = BaseIter<RootT>;
2914  using BaseT::mTileIter;
2915 
2916  public:
2917  __hostdev__ ValueIter() : BaseT() {}
2918  __hostdev__ ValueIter(RootT* parent) : BaseT(parent->data())
2919  {
2920  while (mTileIter && mTileIter.isChild()) ++mTileIter;
2921  }
2922  __hostdev__ ValueType operator*() const {return mTileIter.value();}
2923  __hostdev__ bool isActive() const {return mTileIter.isValueOn();}
2924  __hostdev__ ValueIter& operator++()
2925  {
2926  ++mTileIter;
2927  while (mTileIter && mTileIter.isChild()) ++mTileIter;
2928  return *this;
2929  }
2930  __hostdev__ ValueIter operator++(int)
2931  {
2932  auto tmp = *this;
2933  this->operator++();
2934  return tmp;
2935  }
2936  }; // Member class ValueIter
2937 
2940 
2943 
2944  template<typename RootT>
2945  class ValueOnIter : public BaseIter<RootT>
2946  {
2947  using BaseT = BaseIter<RootT>;
2948  using BaseT::mTileIter;
2949 
2950  public:
2951  __hostdev__ ValueOnIter() : BaseT() {}
2952  __hostdev__ ValueOnIter(RootT* parent) : BaseT(parent->data())
2953  {
2954  while (mTileIter && !mTileIter.isValueOn()) ++mTileIter;
2955  }
2956  __hostdev__ ValueType operator*() const {return mTileIter.value();}
2957  __hostdev__ ValueOnIter& operator++()
2958  {
2959  ++mTileIter;
2960  while (mTileIter && !mTileIter.isValueOn()) ++mTileIter;
2961  return *this;
2962  }
2963  __hostdev__ ValueOnIter operator++(int)
2964  {
2965  auto tmp = *this;
2966  this->operator++();
2967  return tmp;
2968  }
2969  }; // Member class ValueOnIter
2970 
2973 
2976 
2977  template<typename RootT>
2978  class DenseIter : public BaseIter<RootT>
2979  {
2980  using BaseT = BaseIter<RootT>;
2981  using NodeT = typename util::match_const<ChildT, RootT>::type;
2982  using BaseT::mTileIter;
2983 
2984  public:
2985  __hostdev__ DenseIter() : BaseT() {}
2986  __hostdev__ DenseIter(RootT* parent) : BaseT(parent->data()){}
2987  __hostdev__ NodeT* probeChild(ValueType& value) const
2988  {
2989  if (mTileIter.isChild()) return mTileIter.child();
2990  value = mTileIter.value();
2991  return nullptr;
2992  }
2993  __hostdev__ bool isValueOn() const{return mTileIter.isValueOn();}
2994  __hostdev__ DenseIter& operator++()
2995  {
2996  ++mTileIter;
2997  return *this;
2998  }
2999  __hostdev__ DenseIter operator++(int)
3000  {
3001  auto tmp = *this;
3002  ++mTileIter;
3003  return tmp;
3004  }
3005  }; // Member class DenseIter
3006 
3009 
3013 
3014  /// @brief This class cannot be constructed or deleted
3015  RootNode() = delete;
3016  RootNode(const RootNode&) = delete;
3017  RootNode& operator=(const RootNode&) = delete;
3018  ~RootNode() = delete;
3019 
3021 
3022  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
3023 
3024  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
3025 
3026  /// @brief Return a const reference to the index bounding box of all the active values in this tree, i.e. in all nodes of the tree
3027  __hostdev__ const BBoxType& bbox() const { return DataType::mBBox; }
3028 
3029  /// @brief Return the total number of active voxels in the root and all its child nodes.
3030 
3031  /// @brief Return a const reference to the background value, i.e. the value associated with
3032  /// any coordinate location that has not been set explicitly.
3033  __hostdev__ const ValueType& background() const { return DataType::mBackground; }
3034 
3035  /// @brief Return the number of tiles encoded in this root node
3036  __hostdev__ const uint32_t& tileCount() const { return DataType::mTableSize; }
3037  __hostdev__ const uint32_t& getTableSize() const { return DataType::mTableSize; }
3038 
3039  /// @brief Return a const reference to the minimum active value encoded in this root node and any of its child nodes
3040  __hostdev__ const ValueType& minimum() const { return DataType::mMinimum; }
3041 
3042  /// @brief Return a const reference to the maximum active value encoded in this root node and any of its child nodes
3043  __hostdev__ const ValueType& maximum() const { return DataType::mMaximum; }
3044 
3045  /// @brief Return a const reference to the average of all the active values encoded in this root node and any of its child nodes
3046  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
3047 
3048  /// @brief Return the variance of all the active values encoded in this root node and any of its child nodes
3049  __hostdev__ FloatType variance() const { return math::Pow2(DataType::mStdDevi); }
3050 
3051  /// @brief Return a const reference to the standard deviation of all the active values encoded in this root node and any of its child nodes
3052  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
3053 
3054  /// @brief Return the expected memory footprint in bytes with the specified number of tiles
3055  __hostdev__ static uint64_t memUsage(uint32_t tableSize) { return sizeof(RootNode) + tableSize * sizeof(Tile); }
3056 
3057  /// @brief Return the actual memory footprint of this root node
3058  __hostdev__ uint64_t memUsage() const { return sizeof(RootNode) + DataType::mTableSize * sizeof(Tile); }
3059 
3060  /// @brief Return true if this RootNode is empty, i.e. contains no values or nodes
3061  __hostdev__ bool isEmpty() const { return DataType::mTableSize == uint32_t(0); }
3062 
3063  /// @brief Return the value of the given voxel
3064  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
3065  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildType>>(CoordType(i, j, k)); }
3066  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
3067  /// @brief Return the active state of the specified voxel and update the value argument with its stored value
3068  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
3069  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
3070 
3071  template<typename OpT, typename... ArgsT>
3072  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
3073  {
3074  if (const Tile* tile = this->probeTile(ijk)) {
3075  if constexpr(OpT::LEVEL < LEVEL) if (tile->isChild()) return this->getChild(tile)->template get<OpT>(ijk, args...);
3076  return OpT::get(*tile, args...);
3077  }
3078  return OpT::get(*this, args...);
3079  }
3080 
3081  template<typename OpT, typename... ArgsT>
3082  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args)
3083  {
3084  if (Tile* tile = DataType::probeTile(ijk)) {
3085  if constexpr(OpT::LEVEL < LEVEL) if (tile->isChild()) return this->getChild(tile)->template set<OpT>(ijk, args...);
3086  return OpT::set(*tile, args...);
3087  }
3088  return OpT::set(*this, args...);
3089  }
3090 
3091 private:
3092  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData) is misaligned");
3093  static_assert(sizeof(typename DataType::Tile) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(RootData::Tile) is misaligned");
3094 
3095  template<typename, int, int, int>
3096  friend class ReadAccessor;
3097 
3098  template<typename>
3099  friend class Tree;
3100 
3101  template<typename RayT, typename AccT>
3102  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
3103  {
3104  if (const Tile* tile = this->probeTile(ijk)) {
3105  if (tile->isChild()) {
3106  const auto* child = this->getChild(tile);
3107  acc.insert(ijk, child);
3108  return child->getDimAndCache(ijk, ray, acc);
3109  }
3110  return 1 << ChildT::TOTAL; //tile value
3111  }
3112  return ChildNodeType::dim(); // background
3113  }
3114 
3115  template<typename OpT, typename AccT, typename... ArgsT>
3116  __hostdev__ typename OpT::Type getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
3117  {
3118  if (const Tile* tile = this->probeTile(ijk)) {
3119  if constexpr(OpT::LEVEL < LEVEL) {
3120  if (tile->isChild()) {
3121  const ChildT* child = this->getChild(tile);
3122  acc.insert(ijk, child);
3123  return child->template getAndCache<OpT>(ijk, acc, args...);
3124  }
3125  }
3126  return OpT::get(*tile, args...);
3127  }
3128  return OpT::get(*this, args...);
3129  }
3130 
3131  template<typename OpT, typename AccT, typename... ArgsT>
3132  __hostdev__ void setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
3133  {
3134  if (Tile* tile = DataType::probeTile(ijk)) {
3135  if constexpr(OpT::LEVEL < LEVEL) {
3136  if (tile->isChild()) {
3137  ChildT* child = this->getChild(tile);
3138  acc.insert(ijk, child);
3139  return child->template setAndCache<OpT>(ijk, acc, args...);
3140  }
3141  }
3142  return OpT::set(*tile, args...);
3143  }
3144  return OpT::set(*this, args...);
3145  }
3146 
3147 }; // RootNode class
3148 
3149 // After the RootNode the memory layout is assumed to be the sorted Tiles
3150 
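 A minimal sketch (not part of NanoVDB.h) of reading a single voxel through the RootNode interface shown above; in performance-critical loops a ReadAccessor is preferable, and the grid is assumed to have been created or loaded elsewhere:

     #include <nanovdb/NanoVDB.h>

     // Returns the value stored at ijk (the background value if ijk was never set) and reports,
     // via the last argument, whether that voxel is active.
     float sampleVoxel(const nanovdb::NanoGrid<float>& grid, const nanovdb::Coord& ijk, bool& active)
     {
         const auto& root = grid.tree().root();
         float value = root.background();      // value associated with unset coordinates
         active = root.probeValue(ijk, value); // updates value and returns the active state
         return value;
     }
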
3151 // --------------------------> InternalNode <------------------------------------
3152 
3153 /// @brief Struct with all the member data of the InternalNode (useful during serialization of an openvdb InternalNode)
3154 ///
3155 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3156 template<typename ChildT, uint32_t LOG2DIM>
3157 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) InternalData
3158 {
3159  using ValueT = typename ChildT::ValueType;
3160  using BuildT = typename ChildT::BuildType; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3161  using StatsT = typename ChildT::FloatType;
3162  using CoordT = typename ChildT::CoordType;
3163  using MaskT = typename ChildT::template MaskType<LOG2DIM>;
3164  static constexpr bool FIXED_SIZE = true;
3165 
3166  union Tile
3167  {
3168  ValueT value; // value of the tile, i.e. no child node
3169  int64_t child; //signed 64 bit byte offset relative to this InternalData, i.e. child-pointer = Tile::child + this
3170  /// @brief This class cannot be constructed or deleted
3171  Tile() = delete;
3172  Tile(const Tile&) = delete;
3173  Tile& operator=(const Tile&) = delete;
3174  ~Tile() = delete;
3175  };
3176 
3177  math::BBox<CoordT> mBBox; // 24B. node bounding box. |
3178  uint64_t mFlags; // 8B. node flags. | 32B aligned
3179  MaskT mValueMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
3180  MaskT mChildMask; // LOG2DIM(5): 4096B, LOG2DIM(4): 512B | 32B aligned
3181 
3182  ValueT mMinimum; // typically 4B
3183  ValueT mMaximum; // typically 4B
3184  StatsT mAverage; // typically 4B, average of all the active values in this node and its child nodes
3185  StatsT mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
3186  // possible padding, e.g. 28 byte padding when ValueType = bool
3187 
3188  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3189  ///
3190  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3191  __hostdev__ static constexpr uint32_t padding()
3192  {
3193  return sizeof(InternalData) - (24u + 8u + 2 * (sizeof(MaskT) + sizeof(ValueT) + sizeof(StatsT)) + (1u << (3 * LOG2DIM)) * (sizeof(ValueT) > 8u ? sizeof(ValueT) : 8u));
3194  }
3195  alignas(32) Tile mTable[1u << (3 * LOG2DIM)]; // sizeof(ValueT) x (16*16*16 or 32*32*32)
3196 
3197  __hostdev__ static uint64_t memUsage() { return sizeof(InternalData); }
3198 
3199  __hostdev__ void setChild(uint32_t n, const void* ptr)
3200  {
3201  NANOVDB_ASSERT(mChildMask.isOn(n));
3202  mTable[n].child = util::PtrDiff(ptr, this);
3203  }
3204 
3205  template<typename ValueT>
3206  __hostdev__ void setValue(uint32_t n, const ValueT& v)
3207  {
3208  NANOVDB_ASSERT(!mChildMask.isOn(n));
3209  mTable[n].value = v;
3210  }
3211 
3212  /// @brief Returns a pointer to the child node at the specified linear offset.
3213  __hostdev__ ChildT* getChild(uint32_t n)
3214  {
3215  NANOVDB_ASSERT(mChildMask.isOn(n));
3216  return util::PtrAdd<ChildT>(this, mTable[n].child);
3217  }
3218  __hostdev__ const ChildT* getChild(uint32_t n) const
3219  {
3220  NANOVDB_ASSERT(mChildMask.isOn(n));
3221  return util::PtrAdd<ChildT>(this, mTable[n].child);
3222  }
3223 
3224  __hostdev__ ValueT getValue(uint32_t n) const
3225  {
3226  NANOVDB_ASSERT(mChildMask.isOff(n));
3227  return mTable[n].value;
3228  }
3229 
3230  __hostdev__ bool isActive(uint32_t n) const
3231  {
3232  NANOVDB_ASSERT(mChildMask.isOff(n));
3233  return mValueMask.isOn(n);
3234  }
3235 
3236  __hostdev__ bool isChild(uint32_t n) const { return mChildMask.isOn(n); }
3237 
3238  template<typename T>
3239  __hostdev__ void setOrigin(const T& ijk) { mBBox[0] = ijk; }
3240 
3241  __hostdev__ const ValueT& getMin() const { return mMinimum; }
3242  __hostdev__ const ValueT& getMax() const { return mMaximum; }
3243  __hostdev__ const StatsT& average() const { return mAverage; }
3244  __hostdev__ const StatsT& stdDeviation() const { return mStdDevi; }
3245 
3246 // GCC 13 (and possibly prior versions) has a regression that results in invalid
3247 // warnings when -Wstringop-overflow is turned on. For details, refer to
3248 // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101854
3249 // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106757
3250 #if defined(__GNUC__) && (__GNUC__ < 14) && !defined(__APPLE__) && !defined(__llvm__)
3251 #pragma GCC diagnostic push
3252 #pragma GCC diagnostic ignored "-Wstringop-overflow"
3253 #endif
3254  __hostdev__ void setMin(const ValueT& v) { mMinimum = v; }
3255  __hostdev__ void setMax(const ValueT& v) { mMaximum = v; }
3256  __hostdev__ void setAvg(const StatsT& v) { mAverage = v; }
3257  __hostdev__ void setDev(const StatsT& v) { mStdDevi = v; }
3258 #if defined(__GNUC__) && (__GNUC__ < 14) && !defined(__APPLE__) && !defined(__llvm__)
3259 #pragma GCC diagnostic pop
3260 #endif
3261 
3262  /// @brief This class cannot be constructed or deleted
3263  InternalData() = delete;
3264  InternalData(const InternalData&) = delete;
3265  InternalData& operator=(const InternalData&) = delete;
3266  ~InternalData() = delete;
3267 }; // InternalData
3268 
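 Both RootData::Tile::child and InternalData::Tile::child store signed byte offsets instead of raw pointers, which keeps a serialized grid relocatable (e.g. when the whole buffer is copied to the GPU). A standalone sketch (not part of NanoVDB.h) of that linking scheme:

     #include <cassert>
     #include <cstdint>
     #include <new>

     struct Child  { int value; };
     struct Parent
     {
         int64_t childOffset; // byte offset from this Parent to its Child; 0 means "no child, use the tile value"
         void   setChild(const Child* c) { childOffset = reinterpret_cast<const char*>(c) - reinterpret_cast<const char*>(this); }
         Child* getChild() { return childOffset ? reinterpret_cast<Child*>(reinterpret_cast<char*>(this) + childOffset) : nullptr; }
     };

     int main()
     {
         alignas(8) char buffer[sizeof(Parent) + sizeof(Child)]; // both nodes live in one contiguous buffer
         Parent* p = new (buffer) Parent{0};
         Child*  c = new (buffer + sizeof(Parent)) Child{42};
         p->setChild(c);
         assert(p->getChild()->value == 42); // the offset stays valid if the whole buffer is copied elsewhere
         return 0;
     }
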
3269 /// @brief Internal nodes of a VDB tree
3270 template<typename ChildT, uint32_t Log2Dim = ChildT::LOG2DIM + 1>
3271 class InternalNode : public InternalData<ChildT, Log2Dim>
3272 {
3273 public:
3274  using DataType = InternalData<ChildT, Log2Dim>;
3275  using ValueType = typename DataType::ValueT;
3276  using FloatType = typename DataType::StatsT;
3277  using BuildType = typename DataType::BuildT; // in rare cases BuildType != ValueType, e.g. then BuildType = ValueMask and ValueType = bool
3278  using LeafNodeType = typename ChildT::LeafNodeType;
3279  using ChildNodeType = ChildT;
3280  using CoordType = typename ChildT::CoordType;
3281  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
3282  template<uint32_t LOG2>
3283  using MaskType = typename ChildT::template MaskType<LOG2>;
3284  template<bool On>
3285  using MaskIterT = typename Mask<Log2Dim>::template Iterator<On>;
3286 
3287  static constexpr uint32_t LOG2DIM = Log2Dim;
3288  static constexpr uint32_t TOTAL = LOG2DIM + ChildT::TOTAL; // dimension in index space
3289  static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
3290  static constexpr uint32_t SIZE = 1u << (3 * LOG2DIM); // number of tile values (or child pointers)
3291  static constexpr uint32_t MASK = (1u << TOTAL) - 1u;
3292  static constexpr uint32_t LEVEL = 1 + ChildT::LEVEL; // level 0 = leaf
3293  static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node
3294 
3295  /// @brief Visits child nodes of this node only
3296  template <typename ParentT>
3297  class ChildIter : public MaskIterT<true>
3298  {
3299  static_assert(util::is_same<typename util::remove_const<ParentT>::type, InternalNode>::value, "Invalid ParentT");
3300  using BaseT = MaskIterT<true>;
3301  using NodeT = typename util::match_const<ChildT, ParentT>::type;
3302  ParentT* mParent;
3303 
3304  public:
3305  __hostdev__ ChildIter()
3306  : BaseT()
3307  , mParent(nullptr)
3308  {
3309  }
3310  __hostdev__ ChildIter(ParentT* parent)
3311  : BaseT(parent->mChildMask.beginOn())
3312  , mParent(parent)
3313  {
3314  }
3315  ChildIter& operator=(const ChildIter&) = default;
3316  __hostdev__ NodeT& operator*() const
3317  {
3318  NANOVDB_ASSERT(*this);
3319  return *mParent->getChild(BaseT::pos());
3320  }
3321  __hostdev__ NodeT* operator->() const
3322  {
3323  NANOVDB_ASSERT(*this);
3324  return mParent->getChild(BaseT::pos());
3325  }
3326  __hostdev__ CoordType getOrigin() const
3327  {
3328  NANOVDB_ASSERT(*this);
3329  return (*this)->origin();
3330  }
3331  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3332  }; // Member class ChildIter
3333 
3336 
3339 
3340  /// @brief Visits all tile values in this node, i.e. both inactive and active tiles
3341  class ValueIterator : public MaskIterT<false>
3342  {
3343  using BaseT = MaskIterT<false>;
3344  const InternalNode* mParent;
3345 
3346  public:
3347  __hostdev__ ValueIterator()
3348  : BaseT()
3349  , mParent(nullptr)
3350  {
3351  }
3352  __hostdev__ ValueIterator(const InternalNode* parent)
3353  : BaseT(parent->data()->mChildMask.beginOff())
3354  , mParent(parent)
3355  {
3356  }
3357  ValueIterator& operator=(const ValueIterator&) = default;
3358  __hostdev__ ValueType operator*() const
3359  {
3360  NANOVDB_ASSERT(*this);
3361  return mParent->data()->getValue(BaseT::pos());
3362  }
3363  __hostdev__ CoordType getOrigin() const
3364  {
3365  NANOVDB_ASSERT(*this);
3366  return mParent->offsetToGlobalCoord(BaseT::pos());
3367  }
3368  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3369  __hostdev__ bool isActive() const
3370  {
3371  NANOVDB_ASSERT(*this);
3372  return mParent->data()->isActive(BaseT::mPos);
3373  }
3374  }; // Member class ValueIterator
3375 
3378 
3379  /// @brief Visits active tile values of this node only
3380  class ValueOnIterator : public MaskIterT<true>
3381  {
3382  using BaseT = MaskIterT<true>;
3383  const InternalNode* mParent;
3384 
3385  public:
3386  __hostdev__ ValueOnIterator()
3387  : BaseT()
3388  , mParent(nullptr)
3389  {
3390  }
3391  __hostdev__ ValueOnIterator(const InternalNode* parent)
3392  : BaseT(parent->data()->mValueMask.beginOn())
3393  , mParent(parent)
3394  {
3395  }
3396  ValueOnIterator& operator=(const ValueOnIterator&) = default;
3397  __hostdev__ ValueType operator*() const
3398  {
3399  NANOVDB_ASSERT(*this);
3400  return mParent->data()->getValue(BaseT::pos());
3401  }
3402  __hostdev__ CoordType getOrigin() const
3403  {
3404  NANOVDB_ASSERT(*this);
3405  return mParent->offsetToGlobalCoord(BaseT::pos());
3406  }
3407  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3408  }; // Member class ValueOnIterator
3409 
3412 
3413  /// @brief Visits all tile values and child nodes of this node
3414  class DenseIterator : public Mask<Log2Dim>::DenseIterator
3415  {
3416  using BaseT = typename Mask<Log2Dim>::DenseIterator;
3417  const DataType* mParent;
3418 
3419  public:
3420  __hostdev__ DenseIterator()
3421  : BaseT()
3422  , mParent(nullptr)
3423  {
3424  }
3425  __hostdev__ DenseIterator(const InternalNode* parent)
3426  : BaseT(0)
3427  , mParent(parent->data())
3428  {
3429  }
3430  DenseIterator& operator=(const DenseIterator&) = default;
3431  __hostdev__ const ChildT* probeChild(ValueType& value) const
3432  {
3433  NANOVDB_ASSERT(mParent && bool(*this));
3434  const ChildT* child = nullptr;
3435  if (mParent->mChildMask.isOn(BaseT::pos())) {
3436  child = mParent->getChild(BaseT::pos());
3437  } else {
3438  value = mParent->getValue(BaseT::pos());
3439  }
3440  return child;
3441  }
3442  __hostdev__ bool isValueOn() const
3443  {
3444  NANOVDB_ASSERT(mParent && bool(*this));
3445  return mParent->isActive(BaseT::pos());
3446  }
3447  __hostdev__ CoordType getOrigin() const
3448  {
3449  NANOVDB_ASSERT(mParent && bool(*this));
3450  return mParent->offsetToGlobalCoord(BaseT::pos());
3451  }
3452  __hostdev__ CoordType getCoord() const {return this->getOrigin();}
3453  }; // Member class DenseIterator
3454 
3456  __hostdev__ DenseIterator cbeginChildAll() const { return DenseIterator(this); } // matches openvdb
3457 
3458  /// @brief This class cannot be constructed or deleted
3459  InternalNode() = delete;
3460  InternalNode(const InternalNode&) = delete;
3461  InternalNode& operator=(const InternalNode&) = delete;
3462  ~InternalNode() = delete;
3463 
3464  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
3465 
3466  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
3467 
3468  /// @brief Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32)
3469  __hostdev__ static uint32_t dim() { return 1u << TOTAL; }
3470 
3471  /// @brief Return memory usage in bytes for the class
3472  __hostdev__ static size_t memUsage() { return DataType::memUsage(); }
3473 
3474  /// @brief Return a const reference to the bit mask of active voxels in this internal node
3475  __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }
3476  __hostdev__ const MaskType<LOG2DIM>& getValueMask() const { return DataType::mValueMask; }
3477 
3478  /// @brief Return a const reference to the bit mask of child nodes in this internal node
3479  __hostdev__ const MaskType<LOG2DIM>& childMask() const { return DataType::mChildMask; }
3480  __hostdev__ const MaskType<LOG2DIM>& getChildMask() const { return DataType::mChildMask; }
3481 
3482  /// @brief Return the origin in index space of this internal node
3483  __hostdev__ CoordType origin() const { return DataType::mBBox.min() & ~MASK; }
3484 
3485  /// @brief Return a const reference to the minimum active value encoded in this internal node and any of its child nodes
3486  __hostdev__ const ValueType& minimum() const { return this->getMin(); }
3487 
3488  /// @brief Return a const reference to the maximum active value encoded in this internal node and any of its child nodes
3489  __hostdev__ const ValueType& maximum() const { return this->getMax(); }
3490 
3491  /// @brief Return a const reference to the average of all the active values encoded in this internal node and any of its child nodes
3492  __hostdev__ const FloatType& average() const { return DataType::mAverage; }
3493 
3494  /// @brief Return the variance of all the active values encoded in this internal node and any of its child nodes
3495  __hostdev__ FloatType variance() const { return DataType::mStdDevi * DataType::mStdDevi; }
3496 
3497  /// @brief Return a const reference to the standard deviation of all the active values encoded in this internal node and any of its child nodes
3498  __hostdev__ const FloatType& stdDeviation() const { return DataType::mStdDevi; }
3499 
3500  /// @brief Return a const reference to the bounding box in index space of active values in this internal node and any of its child nodes
3501  __hostdev__ const math::BBox<CoordType>& bbox() const { return DataType::mBBox; }
3502 
3503  /// @brief If the first entry in this node's table is a tile, return the tile's value.
3504  /// Otherwise, return the result of calling getFirstValue() on the child.
3505  __hostdev__ ValueType getFirstValue() const
3506  {
3507  return DataType::mChildMask.isOn(0) ? this->getChild(0)->getFirstValue() : DataType::getValue(0);
3508  }
3509 
3510  /// @brief If the last entry in this node's table is a tile, return the tile's value.
3511  /// Otherwise, return the result of calling getLastValue() on the child.
3512  __hostdev__ ValueType getLastValue() const
3513  {
3514  return DataType::mChildMask.isOn(SIZE - 1) ? this->getChild(SIZE - 1)->getLastValue() : DataType::getValue(SIZE - 1);
3515  }
3516 
3517  /// @brief Return the value of the given voxel
3518  __hostdev__ ValueType getValue(const CoordType& ijk) const { return this->template get<GetValue<BuildType>>(ijk); }
3519  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildType>>(ijk); }
3520  /// @brief Return the active state of the specified voxel and update the value argument with its stored value
3521  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildType>>(ijk, v); }
3522  __hostdev__ const LeafNodeType* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildType>>(ijk); }
3523 
3524  __hostdev__ ChildNodeType* probeChild(const CoordType& ijk)
3525  {
3526  const uint32_t n = CoordToOffset(ijk);
3527  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
3528  }
3529  __hostdev__ const ChildNodeType* probeChild(const CoordType& ijk) const
3530  {
3531  const uint32_t n = CoordToOffset(ijk);
3532  return DataType::mChildMask.isOn(n) ? this->getChild(n) : nullptr;
3533  }
3534 
3535  /// @brief Return the linear offset corresponding to the given coordinate
3536  __hostdev__ static uint32_t CoordToOffset(const CoordType& ijk)
3537  {
3538  return (((ijk[0] & MASK) >> ChildT::TOTAL) << (2 * LOG2DIM)) | // note, we're using bitwise OR instead of +
3539  (((ijk[1] & MASK) >> ChildT::TOTAL) << (LOG2DIM)) |
3540  ((ijk[2] & MASK) >> ChildT::TOTAL);
3541  }
3542 
3543  /// @return the local coordinate of the n'th tile or child node
3544  __hostdev__ static Coord OffsetToLocalCoord(uint32_t n)
3545  {
3546  NANOVDB_ASSERT(n < SIZE);
3547  const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
3548  return Coord(n >> 2 * LOG2DIM, m >> LOG2DIM, m & ((1 << LOG2DIM) - 1));
3549  }
3550 
3551  /// @brief Converts, in place, the local coordinates of a tile or child node to global coordinates
3552  __hostdev__ void localToGlobalCoord(Coord& ijk) const
3553  {
3554  ijk <<= ChildT::TOTAL;
3555  ijk += this->origin();
3556  }
3557 
3558  __hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
3559  {
3560  Coord ijk = InternalNode::OffsetToLocalCoord(n);
3561  this->localToGlobalCoord(ijk);
3562  return ijk;
3563  }
3564 
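 A standalone worked example (not part of NanoVDB.h) of the offset arithmetic in CoordToOffset()/OffsetToLocalCoord() above, assuming the default lower internal node configuration, i.e. LOG2DIM = 4 with a leaf child whose ChildT::TOTAL = 3, hence TOTAL = 7 and MASK = 127:

     #include <cassert>
     #include <cstdint>

     constexpr uint32_t LOG2DIM = 4, CHILD_TOTAL = 3, TOTAL = LOG2DIM + CHILD_TOTAL; // lower internal node
     constexpr uint32_t MASK = (1u << TOTAL) - 1u; // 127

     uint32_t coordToOffset(int i, int j, int k)
     {
         return (((i & MASK) >> CHILD_TOTAL) << (2 * LOG2DIM)) |
                (((j & MASK) >> CHILD_TOTAL) << LOG2DIM) |
                 ((k & MASK) >> CHILD_TOTAL);
     }

     void offsetToLocalCoord(uint32_t n, uint32_t& x, uint32_t& y, uint32_t& z)
     {
         const uint32_t m = n & ((1u << 2 * LOG2DIM) - 1);
         x = n >> (2 * LOG2DIM);
         y = m >> LOG2DIM;
         z = m & ((1u << LOG2DIM) - 1);
     }

     int main()
     {
         const uint32_t n = coordToOffset(130, 9, 300); // table entry ((130&127)>>3, (9&127)>>3, (300&127)>>3) = (0,1,5)
         uint32_t x, y, z;
         offsetToLocalCoord(n, x, y, z);
         assert(n == 21 && x == 0 && y == 1 && z == 5);
         return 0;
     }
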
3565  /// @brief Return true if this node or any of its child nodes contain active values
3566  __hostdev__ bool isActive() const { return DataType::mFlags & uint32_t(2); }
3567 
3568  template<typename OpT, typename... ArgsT>
3569  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
3570  {
3571  const uint32_t n = CoordToOffset(ijk);
3572  if constexpr(OpT::LEVEL < LEVEL) if (this->isChild(n)) return this->getChild(n)->template get<OpT>(ijk, args...);
3573  return OpT::get(*this, n, args...);
3574  }
3575 
3576  template<typename OpT, typename... ArgsT>
3577  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args)
3578  {
3579  const uint32_t n = CoordToOffset(ijk);
3580  if constexpr(OpT::LEVEL < LEVEL) if (this->isChild(n)) return this->getChild(n)->template set<OpT>(ijk, args...);
3581  return OpT::set(*this, n, args...);
3582  }
3583 
3584 private:
3585  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(InternalData) is misaligned");
3586 
3587  template<typename, int, int, int>
3588  friend class ReadAccessor;
3589 
3590  template<typename>
3591  friend class RootNode;
3592  template<typename, uint32_t>
3593  friend class InternalNode;
3594 
3595  template<typename RayT, typename AccT>
3596  __hostdev__ uint32_t getDimAndCache(const CoordType& ijk, const RayT& ray, const AccT& acc) const
3597  {
3598  if (DataType::mFlags & uint32_t(1u))
3599  return this->dim(); // skip this node if the 1st bit is set
3600  //if (!ray.intersects( this->bbox() )) return 1<<TOTAL;
3601 
3602  const uint32_t n = CoordToOffset(ijk);
3603  if (DataType::mChildMask.isOn(n)) {
3604  const ChildT* child = this->getChild(n);
3605  acc.insert(ijk, child);
3606  return child->getDimAndCache(ijk, ray, acc);
3607  }
3608  return ChildNodeType::dim(); // tile value
3609  }
3610 
3611  template<typename OpT, typename AccT, typename... ArgsT>
3612  __hostdev__ typename OpT::Type getAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args) const
3613  {
3614  const uint32_t n = CoordToOffset(ijk);
3615  if constexpr(OpT::LEVEL < LEVEL) {
3616  if (this->isChild(n)) {
3617  const ChildT* child = this->getChild(n);
3618  acc.insert(ijk, child);
3619  return child->template getAndCache<OpT>(ijk, acc, args...);
3620  }
3621  }
3622  return OpT::get(*this, n, args...);
3623  }
3624 
3625  template<typename OpT, typename AccT, typename... ArgsT>
3626  __hostdev__ void setAndCache(const CoordType& ijk, const AccT& acc, ArgsT&&... args)
3627  {
3628  const uint32_t n = CoordToOffset(ijk);
3629  if constexpr(OpT::LEVEL < LEVEL) {
3630  if (this->isChild(n)) {
3631  ChildT* child = this->getChild(n);
3632  acc.insert(ijk, child);
3633  return child->template setAndCache<OpT>(ijk, acc, args...);
3634  }
3635  }
3636  return OpT::set(*this, n, args...);
3637  }
3638 
3639 }; // InternalNode class
3640 
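 The ChildIterator/ValueIterator members of InternalNode above simply walk the set (or cleared) bits of mChildMask and mValueMask. A standalone sketch (not part of NanoVDB.h) of that idea on a single 64-bit word, using the GCC/Clang intrinsic __builtin_ctzll:

     #include <cassert>
     #include <cstdint>

     int main()
     {
         uint64_t word = (1ull << 3) | (1ull << 17) | (1ull << 63); // three "child" bits set
         const uint32_t expected[3] = {3, 17, 63};
         uint32_t n = 0;
         while (word) {
             const uint32_t pos = uint32_t(__builtin_ctzll(word)); // index of the lowest set bit
             assert(pos == expected[n++]);
             word &= word - 1; // clear that bit and continue with the next one
         }
         assert(n == 3);
         return 0;
     }
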
3641 // --------------------------> LeafData<T> <------------------------------------
3642 
3643  /// @brief Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
3644 ///
3645 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3646 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3647 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData
3648 {
3649  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3650  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3651  using ValueType = ValueT;
3652  using BuildType = ValueT;
3653  using FloatType = typename FloatTraits<ValueT>::FloatType;
3654  using ArrayType = ValueT; // type used for the internal mValue array
3655  static constexpr bool FIXED_SIZE = true;
3656 
3657  CoordT mBBoxMin; // 12B.
3658  uint8_t mBBoxDif[3]; // 3B.
3659  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3660  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3661 
3662  ValueType mMinimum; // typically 4B
3663  ValueType mMaximum; // typically 4B
3664  FloatType mAverage; // typically 4B, average of all the active values in this node and its child nodes
3665  FloatType mStdDevi; // typically 4B, standard deviation of all the active values in this node and its child nodes
3666  alignas(32) ValueType mValues[1u << 3 * LOG2DIM];
3667 
3668  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3669  ///
3670  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3671  __hostdev__ static constexpr uint32_t padding()
3672  {
3673  return sizeof(LeafData) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * (sizeof(ValueT) + sizeof(FloatType)) + (1u << (3 * LOG2DIM)) * sizeof(ValueT));
3674  }
3675  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
3676 
3677  __hostdev__ static bool hasStats() { return true; }
3678 
3679  __hostdev__ ValueType getValue(uint32_t i) const { return mValues[i]; }
3680  __hostdev__ void setValueOnly(uint32_t offset, const ValueType& value) { mValues[offset] = value; }
3681  __hostdev__ void setValue(uint32_t offset, const ValueType& value)
3682  {
3683  mValueMask.setOn(offset);
3684  mValues[offset] = value;
3685  }
3686  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3687 
3688  __hostdev__ ValueType getMin() const { return mMinimum; }
3689  __hostdev__ ValueType getMax() const { return mMaximum; }
3690  __hostdev__ FloatType getAvg() const { return mAverage; }
3691  __hostdev__ FloatType getDev() const { return mStdDevi; }
3692 
3693 // GCC 11 (and possibly prior versions) has a regression that results in invalid
3694 // warnings when -Wstringop-overflow is turned on. For details, refer to
3695 // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101854
3696 #if defined(__GNUC__) && (__GNUC__ < 12) && !defined(__APPLE__) && !defined(__llvm__)
3697 #pragma GCC diagnostic push
3698 #pragma GCC diagnostic ignored "-Wstringop-overflow"
3699 #endif
3700  __hostdev__ void setMin(const ValueType& v) { mMinimum = v; }
3701  __hostdev__ void setMax(const ValueType& v) { mMaximum = v; }
3702  __hostdev__ void setAvg(const FloatType& v) { mAverage = v; }
3703  __hostdev__ void setDev(const FloatType& v) { mStdDevi = v; }
3704 #if defined(__GNUC__) && (__GNUC__ < 12) && !defined(__APPLE__) && !defined(__llvm__)
3705 #pragma GCC diagnostic pop
3706 #endif
3707 
3708  template<typename T>
3709  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3710 
3711  __hostdev__ void fill(const ValueType& v)
3712  {
3713  for (auto *p = mValues, *q = p + 512; p != q; ++p)
3714  *p = v;
3715  }
3716 
3717  /// @brief This class cannot be constructed or deleted
3718  LeafData() = delete;
3719  LeafData(const LeafData&) = delete;
3720  LeafData& operator=(const LeafData&) = delete;
3721  ~LeafData() = delete;
3722 }; // LeafData<ValueT>
3723 
3724 // --------------------------> LeafFnBase <------------------------------------
3725 
3726 /// @brief Base-class for quantized float leaf nodes
3727 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3728 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafFnBase
3729 {
3730  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3731  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3732  using ValueType = float;
3733  using FloatType = float;
3734 
3735  CoordT mBBoxMin; // 12B.
3736  uint8_t mBBoxDif[3]; // 3B.
3737  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3738  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3739 
3740  float mMinimum; // 4B - minimum of ALL values in this node
3741  float mQuantum; // = (max - min)/15 4B
3742  uint16_t mMin, mMax, mAvg, mDev; // quantized representations of statistics of active values
3743  // no padding since it's always 32B aligned
3744  __hostdev__ static uint64_t memUsage() { return sizeof(LeafFnBase); }
3745 
3746  __hostdev__ static bool hasStats() { return true; }
3747 
3748  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
3749  ///
3750  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
3751  __hostdev__ static constexpr uint32_t padding()
3752  {
3753  return sizeof(LeafFnBase) - (12 + 3 + 1 + sizeof(MaskT<LOG2DIM>) + 2 * 4 + 4 * 2);
3754  }
3755  __hostdev__ void init(float min, float max, uint8_t bitWidth)
3756  {
3757  mMinimum = min;
3758  mQuantum = (max - min) / float((1 << bitWidth) - 1);
3759  }
3760 
3761  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3762 
3763  /// @brief return the quantized minimum of the active values in this node
3764  __hostdev__ float getMin() const { return mMin * mQuantum + mMinimum; }
3765 
3766  /// @brief return the quantized maximum of the active values in this node
3767  __hostdev__ float getMax() const { return mMax * mQuantum + mMinimum; }
3768 
3769  /// @brief return the quantized average of the active values in this node
3770  __hostdev__ float getAvg() const { return mAvg * mQuantum + mMinimum; }
3771  /// @brief return the quantized standard deviation of the active values in this node
3772 
3773  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
3774  __hostdev__ float getDev() const { return mDev * mQuantum; }
3775 
3776  /// @note min <= X <= max or 0 <= (X-min)/(max-min) <= 1
3777  __hostdev__ void setMin(float min) { mMin = uint16_t((min - mMinimum) / mQuantum + 0.5f); }
3778 
3779  /// @note min <= X <= max or 0 <= (X-min)/(max-min) <= 1
3780  __hostdev__ void setMax(float max) { mMax = uint16_t((max - mMinimum) / mQuantum + 0.5f); }
3781 
3782  /// @note min <= avg <= max or 0 <= (avg-min)/(max-min) <= 1
3783  __hostdev__ void setAvg(float avg) { mAvg = uint16_t((avg - mMinimum) / mQuantum + 0.5f); }
3784 
3785  /// @note 0 <= StdDev <= max-min or 0 <= StdDev/(max-min) <= 1
3786  __hostdev__ void setDev(float dev) { mDev = uint16_t(dev / mQuantum + 0.5f); }
3787 
3788  template<typename T>
3789  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
3790 }; // LeafFnBase
3791 
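 A standalone worked example (not part of NanoVDB.h) of the quantization performed by init()/setMin()/getMin() above, here for an 8-bit code as used by the Fp8 specialization:

     #include <cassert>
     #include <cstdint>
     #include <cmath>

     int main()
     {
         const float minVal = -1.0f, maxVal = 1.0f, x = 0.25f;
         const float quantum = (maxVal - minVal) / float((1 << 8) - 1); // (max - min)/255 for Fp8
         const uint16_t code = uint16_t((x - minVal) / quantum + 0.5f); // round to the nearest code
         const float decoded = code * quantum + minVal;                 // what the getters above compute
         assert(std::fabs(decoded - x) <= 0.5f * quantum);              // error bounded by half a quantum
         return 0;
     }
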
3792 // --------------------------> LeafData<Fp4> <------------------------------------
3793 
3794  /// @brief Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode)
3795 ///
3796 /// @note No client code should (or can) interface with this struct so it can safely be ignored!
3797 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3798 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp4, CoordT, MaskT, LOG2DIM>
3799  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3800 {
3801  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3802  using BuildType = Fp4;
3803  using ArrayType = uint8_t; // type used for the internal mValue array
3804  static constexpr bool FIXED_SIZE = true;
3805  alignas(32) uint8_t mCode[1u << (3 * LOG2DIM - 1)]; // LeafFnBase is 32B aligned and so is mCode
3806 
3807  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
3808  __hostdev__ static constexpr uint32_t padding()
3809  {
3810  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3811  return sizeof(LeafData) - sizeof(BaseT) - (1u << (3 * LOG2DIM - 1));
3812  }
3813 
3814  __hostdev__ static constexpr uint8_t bitWidth() { return 4u; }
3815  __hostdev__ float getValue(uint32_t i) const
3816  {
3817 #if 0
3818  const uint8_t c = mCode[i>>1];
3819  return ( (i&1) ? c >> 4 : c & uint8_t(15) )*BaseT::mQuantum + BaseT::mMinimum;
3820 #else
3821  return ((mCode[i >> 1] >> ((i & 1) << 2)) & uint8_t(15)) * BaseT::mQuantum + BaseT::mMinimum;
3822 #endif
3823  }
3824 
3825  /// @brief This class cannot be constructed or deleted
3826  LeafData() = delete;
3827  LeafData(const LeafData&) = delete;
3828  LeafData& operator=(const LeafData&) = delete;
3829  ~LeafData() = delete;
3830 }; // LeafData<Fp4>
3831 
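 A standalone sketch (not part of NanoVDB.h) of the 4-bit packing used by LeafData<Fp4>::getValue above: two codes per byte, with even indices in the low nibble and odd indices in the high nibble:

     #include <cassert>
     #include <cstdint>

     int main()
     {
         const uint8_t packed = 0xA3; // two 4-bit codes: 3 in the low nibble (even index), 10 in the high nibble (odd index)
         auto nibble = [packed](uint32_t i) -> uint32_t { return (packed >> ((i & 1) << 2)) & uint8_t(15); };
         assert(nibble(0) == 3 && nibble(1) == 10);
         return 0;
     }
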
3832 // --------------------------> LeafBase<Fp8> <------------------------------------
3833 
3834 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3835 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp8, CoordT, MaskT, LOG2DIM>
3836  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3837 {
3838  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3839  using BuildType = Fp8;
3840  using ArrayType = uint8_t; // type used for the internal mValue array
3841  static constexpr bool FIXED_SIZE = true;
3842  alignas(32) uint8_t mCode[1u << 3 * LOG2DIM];
3843  __hostdev__ static constexpr int64_t memUsage() { return sizeof(LeafData); }
3844  __hostdev__ static constexpr uint32_t padding()
3845  {
3846  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3847  return sizeof(LeafData) - sizeof(BaseT) - (1u << 3 * LOG2DIM);
3848  }
3849 
3850  __hostdev__ static constexpr uint8_t bitWidth() { return 8u; }
3851  __hostdev__ float getValue(uint32_t i) const
3852  {
3853  return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/255 + min
3854  }
3855  /// @brief This class cannot be constructed or deleted
3856  LeafData() = delete;
3857  LeafData(const LeafData&) = delete;
3858  LeafData& operator=(const LeafData&) = delete;
3859  ~LeafData() = delete;
3860 }; // LeafData<Fp8>
3861 
3862 // --------------------------> LeafData<Fp16> <------------------------------------
3863 
3864 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3865 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Fp16, CoordT, MaskT, LOG2DIM>
3866  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3867 {
3868  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3869  using BuildType = Fp16;
3870  using ArrayType = uint16_t; // type used for the internal mValue array
3871  static constexpr bool FIXED_SIZE = true;
3872  alignas(32) uint16_t mCode[1u << 3 * LOG2DIM];
3873 
3874  __hostdev__ static constexpr uint64_t memUsage() { return sizeof(LeafData); }
3875  __hostdev__ static constexpr uint32_t padding()
3876  {
3877  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3878  return sizeof(LeafData) - sizeof(BaseT) - 2 * (1u << 3 * LOG2DIM);
3879  }
3880 
3881  __hostdev__ static constexpr uint8_t bitWidth() { return 16u; }
3882  __hostdev__ float getValue(uint32_t i) const
3883  {
3884  return mCode[i] * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/65535 + min
3885  }
3886 
3887  /// @brief This class cannot be constructed or deleted
3888  LeafData() = delete;
3889  LeafData(const LeafData&) = delete;
3890  LeafData& operator=(const LeafData&) = delete;
3891  ~LeafData() = delete;
3892 }; // LeafData<Fp16>
3893 
3894 // --------------------------> LeafData<FpN> <------------------------------------
3895 
3896 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3897 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<FpN, CoordT, MaskT, LOG2DIM>
3898  : public LeafFnBase<CoordT, MaskT, LOG2DIM>
3899 { // this class has no additional data members, however every instance is immediately followed by
3900  // bitWidth*64 bytes of quantized values. Since its base class is 32B aligned, so are those bitWidth*64 bytes.
3901  using BaseT = LeafFnBase<CoordT, MaskT, LOG2DIM>;
3902  using BuildType = FpN;
3903  static constexpr bool FIXED_SIZE = false;
3904  __hostdev__ static constexpr uint32_t padding()
3905  {
3906  static_assert(BaseT::padding() == 0, "expected no padding in LeafFnBase");
3907  return 0;
3908  }
3909 
3910  __hostdev__ uint8_t bitWidth() const { return 1 << (BaseT::mFlags >> 5); } // 1,2,4,8,16 = 2^(0,1,2,3,4)
3911  __hostdev__ size_t memUsage() const { return sizeof(*this) + this->bitWidth() * 64; }
3912  __hostdev__ static size_t memUsage(uint32_t bitWidth) { return 96u + bitWidth * 64; }
3913  __hostdev__ float getValue(uint32_t i) const
3914  {
3915 #ifdef NANOVDB_FPN_BRANCHLESS // faster
3916  const int b = BaseT::mFlags >> 5; // b = 0, 1, 2, 3, 4 corresponding to 1, 2, 4, 8, 16 bits
3917 #if 0 // use LUT
3918  uint16_t code = reinterpret_cast<const uint16_t*>(this + 1)[i >> (4 - b)];
3919  const static uint8_t shift[5] = {15, 7, 3, 1, 0};
3920  const static uint16_t mask[5] = {1, 3, 15, 255, 65535};
3921  code >>= (i & shift[b]) << b;
3922  code &= mask[b];
3923 #else // no LUT
3924  uint32_t code = reinterpret_cast<const uint32_t*>(this + 1)[i >> (5 - b)];
3925  code >>= (i & ((32 >> b) - 1)) << b;
3926  code &= (1 << (1 << b)) - 1;
3927 #endif
3928 #else // use branched version (slow)
3929  float code;
3930  auto* values = reinterpret_cast<const uint8_t*>(this + 1);
3931  switch (BaseT::mFlags >> 5) {
3932  case 0u: // 1 bit float
3933  code = float((values[i >> 3] >> (i & 7)) & uint8_t(1));
3934  break;
3935  case 1u: // 2 bits float
3936  code = float((values[i >> 2] >> ((i & 3) << 1)) & uint8_t(3));
3937  break;
3938  case 2u: // 4 bits float
3939  code = float((values[i >> 1] >> ((i & 1) << 2)) & uint8_t(15));
3940  break;
3941  case 3u: // 8 bits float
3942  code = float(values[i]);
3943  break;
3944  default: // 16 bits float
3945  code = float(reinterpret_cast<const uint16_t*>(values)[i]);
3946  }
3947 #endif
3948  return float(code) * BaseT::mQuantum + BaseT::mMinimum; // code * (max-min)/UNITS + min
3949  }
3950 
3951  /// @brief This class cannot be constructed or deleted
3952  LeafData() = delete;
3953  LeafData(const LeafData&) = delete;
3954  LeafData& operator=(const LeafData&) = delete;
3955  ~LeafData() = delete;
3956 }; // LeafData<FpN>
3957 
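 A standalone worked example (not part of NanoVDB.h) of the branchless variable-bit-width extraction used by LeafData<FpN>::getValue above, where b = log2(bitWidth):

     #include <cassert>
     #include <cstdint>

     // Extract the i'th code from an array of 2^b-bit codes packed into 32-bit words.
     uint32_t extract(const uint32_t* words, uint32_t i, int b)
     {
         uint32_t code = words[i >> (5 - b)];    // 32/2^b codes per 32-bit word
         code >>= (i & ((32u >> b) - 1)) << b;   // bit position of code i inside that word
         return code & ((1u << (1u << b)) - 1u); // keep only 2^b bits
     }

     int main()
     {
         const uint32_t words[1] = {0x87654321u}; // eight 4-bit codes: 1,2,...,8 from low to high
         for (uint32_t i = 0; i < 8; ++i)
             assert(extract(words, i, /*b=*/2) == i + 1);
         return 0;
     }
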
3958 // --------------------------> LeafData<bool> <------------------------------------
3959 
3960 // Partial template specialization of LeafData with bool
3961 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
3962 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<bool, CoordT, MaskT, LOG2DIM>
3963 {
3964  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
3965  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
3966  using ValueType = bool;
3967  using BuildType = bool;
3968  using FloatType = bool; // dummy value type
3969  using ArrayType = MaskT<LOG2DIM>; // type used for the internal mValue array
3970  static constexpr bool FIXED_SIZE = true;
3971 
3972  CoordT mBBoxMin; // 12B.
3973  uint8_t mBBoxDif[3]; // 3B.
3974  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
3975  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
3976  MaskT<LOG2DIM> mValues; // LOG2DIM(3): 64B.
3977  uint64_t mPadding[2]; // 16B padding to 32B alignment
3978 
3979  __hostdev__ static constexpr uint32_t padding() { return sizeof(LeafData) - 12u - 3u - 1u - 2 * sizeof(MaskT<LOG2DIM>) - 16u; }
3980  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
3981  __hostdev__ static bool hasStats() { return false; }
3982  __hostdev__ bool getValue(uint32_t i) const { return mValues.isOn(i); }
3983  __hostdev__ bool getMin() const { return false; } // dummy
3984  __hostdev__ bool getMax() const { return false; } // dummy
3985  __hostdev__ bool getAvg() const { return false; } // dummy
3986  __hostdev__ bool getDev() const { return false; } // dummy
3987  __hostdev__ void setValue(uint32_t offset, bool v)
3988  {
3989  mValueMask.setOn(offset);
3990  mValues.set(offset, v);
3991  }
3992  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
3993  __hostdev__ void setMin(const bool&) {} // no-op
3994  __hostdev__ void setMax(const bool&) {} // no-op
3995  __hostdev__ void setAvg(const bool&) {} // no-op
3996  __hostdev__ void setDev(const bool&) {} // no-op
3997 
3998  template<typename T>
3999  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4000 
4001  /// @brief This class cannot be constructed or deleted
4002  LeafData() = delete;
4003  LeafData(const LeafData&) = delete;
4004  LeafData& operator=(const LeafData&) = delete;
4005  ~LeafData() = delete;
4006 }; // LeafData<bool>
4007 
4008 // --------------------------> LeafData<ValueMask> <------------------------------------
4009 
4010 // Partial template specialization of LeafData with ValueMask
4011 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4012 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueMask, CoordT, MaskT, LOG2DIM>
4013 {
4014  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4015  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4016  using ValueType = bool;
4017  using BuildType = ValueMask;
4018  using FloatType = bool; // dummy value type
4019  using ArrayType = void; // type used for the internal mValue array - void means missing
4020  static constexpr bool FIXED_SIZE = true;
4021 
4022  CoordT mBBoxMin; // 12B.
4023  uint8_t mBBoxDif[3]; // 3B.
4024  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4025  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4026  uint64_t mPadding[2]; // 16B padding to 32B alignment
4027 
4028  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4029  __hostdev__ static bool hasStats() { return false; }
4030  __hostdev__ static constexpr uint32_t padding()
4031  {
4032  return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
4033  }
4034 
4035  __hostdev__ bool getValue(uint32_t i) const { return mValueMask.isOn(i); }
4036  __hostdev__ bool getMin() const { return false; } // dummy
4037  __hostdev__ bool getMax() const { return false; } // dummy
4038  __hostdev__ bool getAvg() const { return false; } // dummy
4039  __hostdev__ bool getDev() const { return false; } // dummy
4040  __hostdev__ void setValue(uint32_t offset, bool) { mValueMask.setOn(offset); }
4041  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4042  __hostdev__ void setMin(const ValueType&) {} // no-op
4043  __hostdev__ void setMax(const ValueType&) {} // no-op
4044  __hostdev__ void setAvg(const FloatType&) {} // no-op
4045  __hostdev__ void setDev(const FloatType&) {} // no-op
4046 
4047  template<typename T>
4048  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4049 
4050  /// @brief This class cannot be constructed or deleted
4051  LeafData() = delete;
4052  LeafData(const LeafData&) = delete;
4053  LeafData& operator=(const LeafData&) = delete;
4054  ~LeafData() = delete;
4055 }; // LeafData<ValueMask>
4056 
4057 // --------------------------> LeafIndexBase <------------------------------------
4058 
4059 // Base class of the LeafData partial specializations used by index grids (ValueIndex and ValueOnIndex)
4060 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4061 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafIndexBase
4062 {
4063  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4064  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4065  using ValueType = uint64_t;
4066  using FloatType = uint64_t;
4067  using ArrayType = void; // type used for the internal mValue array - void means missing
4068  static constexpr bool FIXED_SIZE = true;
4069 
4070  CoordT mBBoxMin; // 12B.
4071  uint8_t mBBoxDif[3]; // 3B.
4072  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4073  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4074  uint64_t mOffset, mPrefixSum; // 8B offset to first value in this leaf node and 9-bit prefix sum
4075  __hostdev__ static constexpr uint32_t padding()
4076  {
4077  return sizeof(LeafIndexBase) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u);
4078  }
4079  __hostdev__ static uint64_t memUsage() { return sizeof(LeafIndexBase); }
4080  __hostdev__ bool hasStats() const { return mFlags & (uint8_t(1) << 4); }
4081  // return the offset to the first value indexed by this leaf node
4082  __hostdev__ const uint64_t& firstOffset() const { return mOffset; }
4083  __hostdev__ void setMin(const ValueType&) {} // no-op
4084  __hostdev__ void setMax(const ValueType&) {} // no-op
4085  __hostdev__ void setAvg(const FloatType&) {} // no-op
4086  __hostdev__ void setDev(const FloatType&) {} // no-op
4087  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4088  template<typename T>
4089  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4090 
4091 protected:
4092  /// @brief This class should be used as an abstract class and only constructed or deleted via child classes
4093  LeafIndexBase() = default;
4094  LeafIndexBase(const LeafIndexBase&) = default;
4095  LeafIndexBase& operator=(const LeafIndexBase&) = default;
4096  ~LeafIndexBase() = default;
4097 }; // LeafIndexBase
4098 
4099 // --------------------------> LeafData<ValueIndex> <------------------------------------
4100 
4101 // Partial template specialization of LeafData with ValueIndex
4102 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4103 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueIndex, CoordT, MaskT, LOG2DIM>
4104  : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
4105 {
4106  using BaseT = LeafIndexBase<CoordT, MaskT, LOG2DIM>;
4107  using BuildType = ValueIndex;
4108  // return the total number of values indexed by this leaf node, excluding the optional 4 stats
4109  __hostdev__ static uint32_t valueCount() { return uint32_t(512); } // 8^3 = 2^9
4110  // return the offset to the last value indexed by this leaf node (disregarding optional stats)
4111  __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + 511u; } // 2^9 - 1
4112  // if stats are available, they are always placed after the last voxel value in this leaf node
4113  __hostdev__ uint64_t getMin() const { return this->hasStats() ? BaseT::mOffset + 512u : 0u; }
4114  __hostdev__ uint64_t getMax() const { return this->hasStats() ? BaseT::mOffset + 513u : 0u; }
4115  __hostdev__ uint64_t getAvg() const { return this->hasStats() ? BaseT::mOffset + 514u : 0u; }
4116  __hostdev__ uint64_t getDev() const { return this->hasStats() ? BaseT::mOffset + 515u : 0u; }
4117  __hostdev__ uint64_t getValue(uint32_t i) const { return BaseT::mOffset + i; } // dense leaf node with active and inactive voxels
4118 }; // LeafData<ValueIndex>
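/**
* @brief Illustrative sketch of how a LeafData<ValueIndex> leaf is typically consumed: every one
* of its 512 voxels maps to a unique index (mOffset + linear offset), which client code uses to
* address an external (sidecar) value buffer. The function and buffer below are assumptions of
* the client code, not NanoVDB API.
* @code
* float readIndexedValue(const nanovdb::NanoLeaf<nanovdb::ValueIndex>& leaf,
*                        const float* externalValues, // assumed sidecar array built with the grid
*                        const nanovdb::Coord& ijk)
* {
*     const uint64_t idx = leaf.getValue(ijk); // == mOffset + CoordToOffset(ijk)
*     return externalValues[idx];
* }
* @endcode
*/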
4119 
4120 // --------------------------> LeafData<ValueOnIndex> <------------------------------------
4121 
4122 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4123 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<ValueOnIndex, CoordT, MaskT, LOG2DIM>
4124  : public LeafIndexBase<CoordT, MaskT, LOG2DIM>
4125 {
4126  using BaseT = LeafIndexBase<CoordT, MaskT, LOG2DIM>;
4127  using BuildType = ValueOnIndex;
4128  __hostdev__ uint32_t valueCount() const
4129  {
4130  return util::countOn(BaseT::mValueMask.words()[7]) + (BaseT::mPrefixSum >> 54u & 511u); // last 9 bits of mPrefixSum do not account for the last word in mValueMask
4131  }
4132  __hostdev__ uint64_t lastOffset() const { return BaseT::mOffset + this->valueCount() - 1u; }
4133  __hostdev__ uint64_t getMin() const { return this->hasStats() ? this->lastOffset() + 1u : 0u; }
4134  __hostdev__ uint64_t getMax() const { return this->hasStats() ? this->lastOffset() + 2u : 0u; }
4135  __hostdev__ uint64_t getAvg() const { return this->hasStats() ? this->lastOffset() + 3u : 0u; }
4136  __hostdev__ uint64_t getDev() const { return this->hasStats() ? this->lastOffset() + 4u : 0u; }
4137  __hostdev__ uint64_t getValue(uint32_t i) const
4138  {
4139  //return mValueMask.isOn(i) ? mOffset + mValueMask.countOn(i) : 0u;// for debugging
4140  uint32_t n = i >> 6;
4141  const uint64_t w = BaseT::mValueMask.words()[n], mask = uint64_t(1) << (i & 63u);
4142  if (!(w & mask)) return uint64_t(0); // if i'th value is inactive return offset to background value
4143  uint64_t sum = BaseT::mOffset + util::countOn(w & (mask - 1u));
4144  if (n--) sum += BaseT::mPrefixSum >> (9u * n) & 511u;
4145  return sum;
4146  }
4147 }; // LeafData<ValueOnIndex>
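/**
* @brief Illustrative sketch of the invariant behind LeafData<ValueOnIndex>::getValue: mPrefixSum
* packs seven 9-bit partial sums, where bits [9*(n-1), 9*n) hold the number of active bits in
* words 0..n-1 of mValueMask. The helper below shows how such a value could be assembled on the
* host; the function name is hypothetical and not part of NanoVDB (tools like createNanoGrid
* compute it internally).
* @code
* inline uint64_t buildPrefixSum(const uint64_t words[8]) // the 8 words of a leaf value mask
* {
*     uint64_t prefixSum = 0u, sum = 0u;
*     for (uint32_t n = 1u; n < 8u; ++n) {     // partial sums over words 0..6
*         sum += nanovdb::util::countOn(words[n - 1u]);
*         prefixSum |= sum << (9u * (n - 1u)); // store each partial sum in a 9-bit field
*     }
*     return prefixSum;
* }
* @endcode
*/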
4148 
4149 // --------------------------> LeafData<Point> <------------------------------------
4150 
4151 template<typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4152 struct NANOVDB_ALIGN(NANOVDB_DATA_ALIGNMENT) LeafData<Point, CoordT, MaskT, LOG2DIM>
4153 {
4154  static_assert(sizeof(CoordT) == sizeof(Coord), "Mismatching sizeof");
4155  static_assert(sizeof(MaskT<LOG2DIM>) == sizeof(Mask<LOG2DIM>), "Mismatching sizeof");
4156  using ValueType = uint64_t;
4157  using BuildType = Point;
4158  using FloatType = typename FloatTraits<ValueType>::FloatType;
4159  using ArrayType = uint16_t; // type used for the internal mValue array
4160  static constexpr bool FIXED_SIZE = true;
4161 
4162  CoordT mBBoxMin; // 12B.
4163  uint8_t mBBoxDif[3]; // 3B.
4164  uint8_t mFlags; // 1B. bit0: skip render?, bit1: has bbox?, bit3: unused, bit4: has stats, bits5,6,7: bit-width for FpN
4165  MaskT<LOG2DIM> mValueMask; // LOG2DIM(3): 64B.
4166 
4167  uint64_t mOffset; // 8B
4168  uint64_t mPointCount; // 8B
4169  alignas(32) uint16_t mValues[1u << 3 * LOG2DIM]; // 1KB
4170  // no padding
4171 
4172  /// @brief Return padding of this class in bytes, due to aliasing and 32B alignment
4173  ///
4174  /// @note The extra bytes are not necessarily at the end, but can come from aliasing of individual data members.
4175  __hostdev__ static constexpr uint32_t padding()
4176  {
4177  return sizeof(LeafData) - (12u + 3u + 1u + sizeof(MaskT<LOG2DIM>) + 2 * 8u + (1u << 3 * LOG2DIM) * 2u);
4178  }
4179  __hostdev__ static uint64_t memUsage() { return sizeof(LeafData); }
4180 
4181  __hostdev__ uint64_t offset() const { return mOffset; }
4182  __hostdev__ uint64_t pointCount() const { return mPointCount; }
4183  __hostdev__ uint64_t first(uint32_t i) const { return i ? uint64_t(mValues[i - 1u]) + mOffset : mOffset; }
4184  __hostdev__ uint64_t last(uint32_t i) const { return uint64_t(mValues[i]) + mOffset; }
4185  __hostdev__ uint64_t getValue(uint32_t i) const { return uint64_t(mValues[i]); }
4186  __hostdev__ void setValueOnly(uint32_t offset, uint16_t value) { mValues[offset] = value; }
4187  __hostdev__ void setValue(uint32_t offset, uint16_t value)
4188  {
4189  mValueMask.setOn(offset);
4190  mValues[offset] = value;
4191  }
4192  __hostdev__ void setOn(uint32_t offset) { mValueMask.setOn(offset); }
4193 
4194  __hostdev__ ValueType getMin() const { return mOffset; }
4195  __hostdev__ ValueType getMax() const { return mPointCount; }
4196  __hostdev__ FloatType getAvg() const { return 0.0f; }
4197  __hostdev__ FloatType getDev() const { return 0.0f; }
4198 
4199  __hostdev__ void setMin(const ValueType&) {}
4200  __hostdev__ void setMax(const ValueType&) {}
4201  __hostdev__ void setAvg(const FloatType&) {}
4202  __hostdev__ void setDev(const FloatType&) {}
4203 
4204  template<typename T>
4205  __hostdev__ void setOrigin(const T& ijk) { mBBoxMin = ijk; }
4206 
4207  /// @brief This class cannot be constructed or deleted
4208  LeafData() = delete;
4209  LeafData(const LeafData&) = delete;
4210  LeafData& operator=(const LeafData&) = delete;
4211  ~LeafData() = delete;
4212 }; // LeafData<Point>
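/**
* @brief Illustrative sketch of how LeafData<Point> delimits the points that fall inside one
* voxel: mValues[i] stores the end of voxel i's range (relative to mOffset), so first(i)/last(i)
* bracket the half-open range of global point indices. The attribute buffer below is an
* assumption of the client code, not NanoVDB API.
* @code
* template<typename Vec3T>
* void forEachPointInVoxel(const nanovdb::NanoLeaf<nanovdb::Point>& leaf,
*                          uint32_t i,             // linear voxel offset, 0 <= i < 512
*                          const Vec3T* positions) // assumed per-point attribute buffer
* {
*     for (uint64_t j = leaf.first(i), end = leaf.last(i); j < end; ++j) {
*         const Vec3T& p = positions[j]; // j is a global point index
*         (void)p; // ... process the point ...
*     }
* }
* @endcode
*/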
4213 
4214 // --------------------------> LeafNode<T> <------------------------------------
4215 
4216 /// @brief Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
4217 template<typename BuildT,
4218  typename CoordT = Coord,
4219  template<uint32_t> class MaskT = Mask,
4220  uint32_t Log2Dim = 3>
4221 class LeafNode : public LeafData<BuildT, CoordT, MaskT, Log2Dim>
4222 {
4223 public:
4224  struct ChildNodeType
4225  {
4226  static constexpr uint32_t TOTAL = 0;
4227  static constexpr uint32_t DIM = 1;
4228  __hostdev__ static uint32_t dim() { return 1u; }
4229  }; // Voxel
4230  using LeafNodeType = LeafNode<BuildT, CoordT, MaskT, Log2Dim>;
4231  using DataType = LeafData<BuildT, CoordT, MaskT, Log2Dim>;
4232  using ValueType = typename DataType::ValueType;
4233  using FloatType = typename DataType::FloatType;
4234  using BuildType = typename DataType::BuildType;
4235  using CoordType = CoordT;
4236  static constexpr bool FIXED_SIZE = DataType::FIXED_SIZE;
4237  template<uint32_t LOG2>
4238  using MaskType = MaskT<LOG2>;
4239  template<bool ON>
4240  using MaskIterT = typename Mask<Log2Dim>::template Iterator<ON>;
4241 
4242  /// @brief Visits all active values in a leaf node
4243  class ValueOnIterator : public MaskIterT<true>
4244  {
4245  using BaseT = MaskIterT<true>;
4246  const LeafNode* mParent;
4247 
4248  public:
4249  __hostdev__ ValueOnIterator()
4250  : BaseT()
4251  , mParent(nullptr)
4252  {
4253  }
4254  __hostdev__ ValueOnIterator(const LeafNode* parent)
4255  : BaseT(parent->data()->mValueMask.beginOn())
4256  , mParent(parent)
4257  {
4258  }
4259  ValueOnIterator& operator=(const ValueOnIterator&) = default;
4260  __hostdev__ ValueType operator*() const
4261  {
4262  NANOVDB_ASSERT(*this);
4263  return mParent->getValue(BaseT::pos());
4264  }
4265  __hostdev__ CoordT getCoord() const
4266  {
4267  NANOVDB_ASSERT(*this);
4268  return mParent->offsetToGlobalCoord(BaseT::pos());
4269  }
4270  }; // Member class ValueOnIterator
4271 
4272  __hostdev__ ValueOnIterator beginValueOn() const { return ValueOnIterator(this); }
4273  __hostdev__ ValueOnIterator cbeginValueOn() const { return ValueOnIterator(this); }
4274 
4275  /// @brief Visits all inactive values in a leaf node
4276  class ValueOffIterator : public MaskIterT<false>
4277  {
4278  using BaseT = MaskIterT<false>;
4279  const LeafNode* mParent;
4280 
4281  public:
4282  __hostdev__ ValueOffIterator()
4283  : BaseT()
4284  , mParent(nullptr)
4285  {
4286  }
4287  __hostdev__ ValueOffIterator(const LeafNode* parent)
4288  : BaseT(parent->data()->mValueMask.beginOff())
4289  , mParent(parent)
4290  {
4291  }
4292  ValueOffIterator& operator=(const ValueOffIterator&) = default;
4293  __hostdev__ ValueType operator*() const
4294  {
4295  NANOVDB_ASSERT(*this);
4296  return mParent->getValue(BaseT::pos());
4297  }
4298  __hostdev__ CoordT getCoord() const
4299  {
4300  NANOVDB_ASSERT(*this);
4301  return mParent->offsetToGlobalCoord(BaseT::pos());
4302  }
4303  }; // Member class ValueOffIterator
4304 
4305  __hostdev__ ValueOffIterator beginValueOff() const { return ValueOffIterator(this); }
4306  __hostdev__ ValueOffIterator cbeginValueOff() const { return ValueOffIterator(this); }
4307 
4308  /// @brief Visits all values in a leaf node, i.e. both active and inactive values
4309  class ValueIterator
4310  {
4311  const LeafNode* mParent;
4312  uint32_t mPos;
4313 
4314  public:
4315  __hostdev__ ValueIterator()
4316  : mParent(nullptr)
4317  , mPos(1u << 3 * Log2Dim)
4318  {
4319  }
4320  __hostdev__ ValueIterator(const LeafNode* parent)
4321  : mParent(parent)
4322  , mPos(0)
4323  {
4324  NANOVDB_ASSERT(parent);
4325  }
4326  ValueIterator& operator=(const ValueIterator&) = default;
4327  __hostdev__ ValueType operator*() const
4328  {
4329  NANOVDB_ASSERT(*this);
4330  return mParent->getValue(mPos);
4331  }
4332  __hostdev__ CoordT getCoord() const
4333  {
4334  NANOVDB_ASSERT(*this);
4335  return mParent->offsetToGlobalCoord(mPos);
4336  }
4337  __hostdev__ bool isActive() const
4338  {
4339  NANOVDB_ASSERT(*this);
4340  return mParent->isActive(mPos);
4341  }
4342  __hostdev__ operator bool() const { return mPos < (1u << 3 * Log2Dim); }
4343  __hostdev__ ValueIterator& operator++()
4344  {
4345  ++mPos;
4346  return *this;
4347  }
4348  __hostdev__ ValueIterator operator++(int)
4349  {
4350  auto tmp = *this;
4351  ++(*this);
4352  return tmp;
4353  }
4354  }; // Member class ValueIterator
4355 
4356  __hostdev__ ValueIterator beginValue() const { return ValueIterator(this); }
4357  __hostdev__ ValueIterator cbeginValueAll() const { return ValueIterator(this); }
4358 
4359  static_assert(util::is_same<ValueType, typename BuildToValueMap<BuildType>::Type>::value, "Mismatching BuildType");
4360  static constexpr uint32_t LOG2DIM = Log2Dim;
4361  static constexpr uint32_t TOTAL = LOG2DIM; // needed by parent nodes
4362  static constexpr uint32_t DIM = 1u << TOTAL; // number of voxels along each axis of this node
4363  static constexpr uint32_t SIZE = 1u << 3 * LOG2DIM; // total number of voxels represented by this node
4364  static constexpr uint32_t MASK = (1u << LOG2DIM) - 1u; // mask for bit operations
4365  static constexpr uint32_t LEVEL = 0; // level 0 = leaf
4366  static constexpr uint64_t NUM_VALUES = uint64_t(1) << (3 * TOTAL); // total voxel count represented by this node
4367 
4368  __hostdev__ DataType* data() { return reinterpret_cast<DataType*>(this); }
4369 
4370  __hostdev__ const DataType* data() const { return reinterpret_cast<const DataType*>(this); }
4371 
4372  /// @brief Return a const reference to the bit mask of active voxels in this leaf node
4373  __hostdev__ const MaskType<LOG2DIM>& valueMask() const { return DataType::mValueMask; }
4374  __hostdev__ const MaskType<LOG2DIM>& getValueMask() const { return DataType::mValueMask; }
4375 
4376  /// @brief Return a const reference to the minimum active value encoded in this leaf node
4377  __hostdev__ ValueType minimum() const { return DataType::getMin(); }
4378 
4379  /// @brief Return a const reference to the maximum active value encoded in this leaf node
4380  __hostdev__ ValueType maximum() const { return DataType::getMax(); }
4381 
4382  /// @brief Return a const reference to the average of all the active values encoded in this leaf node
4383  __hostdev__ FloatType average() const { return DataType::getAvg(); }
4384 
4385  /// @brief Return the variance of all the active values encoded in this leaf node
4386  __hostdev__ FloatType variance() const { return Pow2(DataType::getDev()); }
4387 
4388  /// @brief Return a const reference to the standard deviation of all the active values encoded in this leaf node
4389  __hostdev__ FloatType stdDeviation() const { return DataType::getDev(); }
4390 
4391  __hostdev__ uint8_t flags() const { return DataType::mFlags; }
4392 
4393  /// @brief Return the origin in index space of this leaf node
4394  __hostdev__ CoordT origin() const { return DataType::mBBoxMin & ~MASK; }
4395 
4396  /// @brief Compute the local coordinates from a linear offset
4397  /// @param n Linear offset into this node's dense table
4398  /// @return Local (vs global) 3D coordinates
4399  __hostdev__ static CoordT OffsetToLocalCoord(uint32_t n)
4400  {
4401  NANOVDB_ASSERT(n < SIZE);
4402  const uint32_t m = n & ((1 << 2 * LOG2DIM) - 1);
4403  return CoordT(n >> 2 * LOG2DIM, m >> LOG2DIM, m & MASK);
4404  }
4405 
4406  /// @brief Converts (in place) a local index coordinate to a global index coordinate
4407  __hostdev__ void localToGlobalCoord(Coord& ijk) const { ijk += this->origin(); }
4408 
4409  __hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
4410  {
4411  return OffsetToLocalCoord(n) + this->origin();
4412  }
4413 
4414  /// @brief Return the dimension, in index space, of this leaf node (typically 8 as for openvdb leaf nodes!)
4415  __hostdev__ static uint32_t dim() { return 1u << LOG2DIM; }
4416 
4417  /// @brief Return the bounding box in index space of active values in this leaf node
4418  __hostdev__ math::BBox<CoordT> bbox() const
4419  {
4420  math::BBox<CoordT> bbox(DataType::mBBoxMin, DataType::mBBoxMin);
4421  if (this->hasBBox()) {
4422  bbox.max()[0] += DataType::mBBoxDif[0];
4423  bbox.max()[1] += DataType::mBBoxDif[1];
4424  bbox.max()[2] += DataType::mBBoxDif[2];
4425  } else { // very rare case
4426  bbox = math::BBox<CoordT>(); // invalid
4427  }
4428  return bbox;
4429  }
4430 
4431  /// @brief Return the total number of voxels (e.g. values) encoded in this leaf node
4432  __hostdev__ static uint32_t voxelCount() { return 1u << (3 * LOG2DIM); }
4433 
4434  __hostdev__ static uint32_t padding() { return DataType::padding(); }
4435 
4436  /// @brief return memory usage in bytes for the leaf node
4437  __hostdev__ uint64_t memUsage() const { return DataType::memUsage(); }
4438 
4439  /// @brief This class cannot be constructed or deleted
4440  LeafNode() = delete;
4441  LeafNode(const LeafNode&) = delete;
4442  LeafNode& operator=(const LeafNode&) = delete;
4443  ~LeafNode() = delete;
4444 
4445  /// @brief Return the voxel value at the given offset.
4446  __hostdev__ ValueType getValue(uint32_t offset) const { return DataType::getValue(offset); }
4447 
4448  /// @brief Return the voxel value at the given coordinate.
4449  __hostdev__ ValueType getValue(const CoordT& ijk) const { return DataType::getValue(CoordToOffset(ijk)); }
4450 
4451  /// @brief Return the first value in this leaf node.
4452  __hostdev__ ValueType getFirstValue() const { return this->getValue(0); }
4453  /// @brief Return the last value in this leaf node.
4454  __hostdev__ ValueType getLastValue() const { return this->getValue(SIZE - 1); }
4455 
4456  /// @brief Sets the value at the specified location and activate its state.
4457  ///
4458  /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
4459  __hostdev__ void setValue(const CoordT& ijk, const ValueType& v) { DataType::setValue(CoordToOffset(ijk), v); }
4460 
4461  /// @brief Sets the value at the specified location but leaves its state unchanged.
4462  ///
4463  /// @note This is safe since it does not change the topology of the tree (unlike setValue methods on the other nodes)
4464  __hostdev__ void setValueOnly(uint32_t offset, const ValueType& v) { DataType::setValueOnly(offset, v); }
4465  __hostdev__ void setValueOnly(const CoordT& ijk, const ValueType& v) { DataType::setValueOnly(CoordToOffset(ijk), v); }
4466 
4467  /// @brief Return @c true if the voxel value at the given coordinate is active.
4468  __hostdev__ bool isActive(const CoordT& ijk) const { return DataType::mValueMask.isOn(CoordToOffset(ijk)); }
4469  __hostdev__ bool isActive(uint32_t n) const { return DataType::mValueMask.isOn(n); }
4470 
4471  /// @brief Return @c true if any of the voxel values are active in this leaf node.
4472  __hostdev__ bool isActive() const
4473  {
4474  //NANOVDB_ASSERT( bool(DataType::mFlags & uint8_t(2)) != DataType::mValueMask.isOff() );
4475  //return DataType::mFlags & uint8_t(2);
4476  return !DataType::mValueMask.isOff();
4477  }
4478 
4479  __hostdev__ bool hasBBox() const { return DataType::mFlags & uint8_t(2); }
4480 
4481  /// @brief Return @c true if the voxel value at the given coordinate is active and updates @c v with the value.
4482  __hostdev__ bool probeValue(const CoordT& ijk, ValueType& v) const
4483  {
4484  const uint32_t n = CoordToOffset(ijk);
4485  v = DataType::getValue(n);
4486  return DataType::mValueMask.isOn(n);
4487  }
4488 
4489  __hostdev__ const LeafNode* probeLeaf(const CoordT&) const { return this; }
4490 
4491  /// @brief Return the linear offset corresponding to the given coordinate
4492  __hostdev__ static uint32_t CoordToOffset(const CoordT& ijk)
4493  {
4494  return ((ijk[0] & MASK) << (2 * LOG2DIM)) | ((ijk[1] & MASK) << LOG2DIM) | (ijk[2] & MASK);
4495  }
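 // Worked example (illustrative): with the default LOG2DIM = 3, CoordToOffset packs the three
 // low bits of each axis as x*64 + y*8 + z. For ijk = (9, 2, 5): (9 & 7) = 1, (2 & 7) = 2,
 // (5 & 7) = 5, so the offset is (1 << 6) | (2 << 3) | 5 = 85, and OffsetToLocalCoord(85)
 // recovers the local coordinate (1, 2, 5).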
4496 
4497  /// @brief Updates the local bounding box of active voxels in this node. Return true if bbox was updated.
4498  ///
4499  /// @warning It assumes that the origin and value mask have already been set.
4500  ///
4501  /// @details This method is based on a few (intrinsic) bit operations and hence is relatively fast.
4502  /// However, it should only be called if either the value mask has changed or if the
4503  /// active bounding box is still undefined, e.g. during construction of this node.
4504  __hostdev__ bool updateBBox();
4505 
4506  template<typename OpT, typename... ArgsT>
4507  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
4508  {
4509  return OpT::get(*this, CoordToOffset(ijk), args...);
4510  }
4511 
4512  template<typename OpT, typename... ArgsT>
4513  __hostdev__ auto get(const uint32_t n, ArgsT&&... args) const
4514  {
4515  return OpT::get(*this, n, args...);
4516  }
4517 
4518  template<typename OpT, typename... ArgsT>
4519  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args)
4520  {
4521  return OpT::set(*this, CoordToOffset(ijk), args...);
4522  }
4523 
4524  template<typename OpT, typename... ArgsT>
4525  __hostdev__ auto set(const uint32_t n, ArgsT&&... args)
4526  {
4527  return OpT::set(*this, n, args...);
4528  }
4529 
4530 private:
4531  static_assert(sizeof(DataType) % NANOVDB_DATA_ALIGNMENT == 0, "sizeof(LeafData) is misaligned");
4532 
4533  template<typename, int, int, int>
4534  friend class ReadAccessor;
4535 
4536  template<typename>
4537  friend class RootNode;
4538  template<typename, uint32_t>
4539  friend class InternalNode;
4540 
4541  template<typename RayT, typename AccT>
4542  __hostdev__ uint32_t getDimAndCache(const CoordT&, const RayT& /*ray*/, const AccT&) const
4543  {
4544  if (DataType::mFlags & uint8_t(1u))
4545  return this->dim(); // skip this node if the 1st bit is set
4546 
4547  //if (!ray.intersects( this->bbox() )) return 1 << LOG2DIM;
4548  return ChildNodeType::dim();
4549  }
4550 
4551  template<typename OpT, typename AccT, typename... ArgsT>
4552  __hostdev__ auto
4553  //__hostdev__ decltype(OpT::get(util::declval<const LeafNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
4554  getAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args) const
4555  {
4556  return OpT::get(*this, CoordToOffset(ijk), args...);
4557  }
4558 
4559  template<typename OpT, typename AccT, typename... ArgsT>
4560  //__hostdev__ auto // occasionally fails with NVCC
4561  __hostdev__ decltype(OpT::set(util::declval<LeafNode&>(), util::declval<uint32_t>(), util::declval<ArgsT>()...))
4562  setAndCache(const CoordType& ijk, const AccT&, ArgsT&&... args)
4563  {
4564  return OpT::set(*this, CoordToOffset(ijk), args...);
4565  }
4566 
4567 }; // LeafNode class
4568 
4569 // --------------------------> LeafNode<T>::updateBBox <------------------------------------
4570 
4571 template<typename ValueT, typename CoordT, template<uint32_t> class MaskT, uint32_t LOG2DIM>
4572 __hostdev__ inline bool LeafNode<ValueT, CoordT, MaskT, LOG2DIM>::updateBBox()
4573 {
4574  static_assert(LOG2DIM == 3, "LeafNode::updateBBox: only supports LOG2DIM = 3!");
4575  if (DataType::mValueMask.isOff()) {
4576  DataType::mFlags &= ~uint8_t(2); // set 2nd bit off, which indicates that this node has no bbox
4577  return false;
4578  }
4579  auto update = [&](uint32_t min, uint32_t max, int axis) {
4580  NANOVDB_ASSERT(min <= max && max < 8);
4581  DataType::mBBoxMin[axis] = (DataType::mBBoxMin[axis] & ~MASK) + int(min);
4582  DataType::mBBoxDif[axis] = uint8_t(max - min);
4583  };
4584  uint64_t *w = DataType::mValueMask.words(), word64 = *w;
4585  uint32_t Xmin = word64 ? 0u : 8u, Xmax = Xmin;
4586  for (int i = 1; i < 8; ++i) { // loop over the 7 remaining 64-bit words
4587  if (w[i]) { // skip if word has no set bits
4588  word64 |= w[i]; // union 8 x 64 bits words into one 64 bit word
4589  if (Xmin == 8)
4590  Xmin = i; // only set once
4591  Xmax = i;
4592  }
4593  }
4594  NANOVDB_ASSERT(word64);
4595  update(Xmin, Xmax, 0);
4596  update(util::findLowestOn(word64) >> 3, util::findHighestOn(word64) >> 3, 1);
4597  const uint32_t *p = reinterpret_cast<const uint32_t*>(&word64), word32 = p[0] | p[1];
4598  const uint16_t *q = reinterpret_cast<const uint16_t*>(&word32), word16 = q[0] | q[1];
4599  const uint8_t *b = reinterpret_cast<const uint8_t*>(&word16), byte = b[0] | b[1];
4600  NANOVDB_ASSERT(byte);
4601  update(util::findLowestOn(static_cast<uint32_t>(byte)), util::findHighestOn(static_cast<uint32_t>(byte)), 2);
4602  DataType::mFlags |= uint8_t(2); // set 2nd bit on, which indicates that this node has a bbox
4603  return true;
4604 } // LeafNode::updateBBox
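// Note on the folding above (illustrative): for an 8x8x8 leaf, word i of the value mask covers
// the slice x = i, each group of 8 bits within a word is one y row, and each bit within such a
// group is one z value. OR-ing the 8 words projects the active mask onto the y/z plane (giving
// the y extents via findLowestOn/findHighestOn >> 3), and OR-ing that word down to a single
// byte projects it onto the z axis, giving the z extents.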
4605 
4606 // --------------------------> Template specializations and traits <------------------------------------
4607 
4608 /// @brief Template specializations to the default configuration used in OpenVDB:
4609 /// Root -> 32^3 -> 16^3 -> 8^3
4610 template<typename BuildT>
4611 using NanoLeaf = LeafNode<BuildT, Coord, Mask, 3>;
4612 template<typename BuildT>
4613 using NanoLower = InternalNode<NanoLeaf<BuildT>, 4>;
4614 template<typename BuildT>
4615 using NanoUpper = InternalNode<NanoLower<BuildT>, 5>;
4616 template<typename BuildT>
4617 using NanoRoot = RootNode<NanoUpper<BuildT>>;
4618 template<typename BuildT>
4619 using NanoTree = Tree<NanoRoot<BuildT>>;
4620 template<typename BuildT>
4621 using NanoGrid = Grid<NanoTree<BuildT>>;
4622 
4623 /// @brief Trait to map from LEVEL to node type
4624 template<typename BuildT, int LEVEL>
4625 struct NanoNode;
4626 
4627 // Partial template specialization of above Node struct
4628 template<typename BuildT>
4629 struct NanoNode<BuildT, 0>
4630 {
4631  using Type = NanoLeaf<BuildT>;
4632  using type = NanoLeaf<BuildT>;
4633 };
4634 template<typename BuildT>
4635 struct NanoNode<BuildT, 1>
4636 {
4637  using Type = NanoLower<BuildT>;
4638  using type = NanoLower<BuildT>;
4639 };
4640 template<typename BuildT>
4641 struct NanoNode<BuildT, 2>
4642 {
4643  using Type = NanoUpper<BuildT>;
4644  using type = NanoUpper<BuildT>;
4645 };
4646 template<typename BuildT>
4647 struct NanoNode<BuildT, 3>
4648 {
4649  using Type = NanoRoot<BuildT>;
4650  using type = NanoRoot<BuildT>;
4651 };
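/**
* @brief Illustrative use of the NanoNode trait (assuming the standard aliases above):
* @code
* static_assert(nanovdb::util::is_same<nanovdb::NanoNode<float, 0>::type, nanovdb::NanoLeaf<float>>::value, "level 0 is the leaf");
* static_assert(nanovdb::util::is_same<nanovdb::NanoNode<float, 3>::type, nanovdb::NanoRoot<float>>::value, "level 3 is the root");
* @endcode
*/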
4652 
4653 template<typename BuildT, int LEVEL>
4654 using NanoNodeT = typename NanoNode<BuildT, LEVEL>::type;
4655 
4674 
4694 
4695 // --------------------------> callNanoGrid <------------------------------------
4696 
4697 /**
4698 * @brief Below is an example of the struct used for generic programming with callNanoGrid
4699 * @details For an example see "struct Crc32TailOld" in nanovdb/tools/GridChecksum.h or
4700 * "struct IsNanoGridValid" in nanovdb/tools/GridValidator.h
4701 * @code
4702 * struct OpT {
4703 * // define these two static functions with non-const GridData
4704 * template <typename BuildT>
4705 * static auto known( GridData *gridData, args...);
4706 * static auto unknown( GridData *gridData, args...);
4707 * // or alternatively these two static functions with const GridData
4708 * template <typename BuildT>
4709 * static auto known(const GridData *gridData, args...);
4710 * static auto unknown(const GridData *gridData, args...);
4711 * };
4712 * @endcode
4713 *
4714 * @brief Here is an example of how to use callNanoGrid in client code
4715 * @code
4716 * return callNanoGrid<OpT>(gridData, args...);
4717 * @endcode
4718 */
4719 
4720 /// @brief Use this function, which takes a pointer to GridData, to call
4721 /// other functions that depend on a NanoGrid of a known ValueType.
4722 /// @details This function allows for generic programming by converting GridData
4723 /// to a NanoGrid of the type encoded in GridData::mGridType.
4724 template<typename OpT, typename GridDataT, typename... ArgsT>
4725 auto callNanoGrid(GridDataT *gridData, ArgsT&&... args)
4726 {
4727  static_assert(util::is_same<GridDataT, GridData, const GridData>::value, "Expected gridData to be of type GridData* or const GridData*");
4728  switch (gridData->mGridType){
4729  case GridType::Float:
4730  return OpT::template known<float>(gridData, args...);
4731  case GridType::Double:
4732  return OpT::template known<double>(gridData, args...);
4733  case GridType::Int16:
4734  return OpT::template known<int16_t>(gridData, args...);
4735  case GridType::Int32:
4736  return OpT::template known<int32_t>(gridData, args...);
4737  case GridType::Int64:
4738  return OpT::template known<int64_t>(gridData, args...);
4739  case GridType::Vec3f:
4740  return OpT::template known<Vec3f>(gridData, args...);
4741  case GridType::Vec3d:
4742  return OpT::template known<Vec3d>(gridData, args...);
4743  case GridType::UInt32:
4744  return OpT::template known<uint32_t>(gridData, args...);
4745  case GridType::Mask:
4746  return OpT::template known<ValueMask>(gridData, args...);
4747  case GridType::Index:
4748  return OpT::template known<ValueIndex>(gridData, args...);
4749  case GridType::OnIndex:
4750  return OpT::template known<ValueOnIndex>(gridData, args...);
4751  case GridType::Boolean:
4752  return OpT::template known<bool>(gridData, args...);
4753  case GridType::RGBA8:
4754  return OpT::template known<math::Rgba8>(gridData, args...);
4755  case GridType::Fp4:
4756  return OpT::template known<Fp4>(gridData, args...);
4757  case GridType::Fp8:
4758  return OpT::template known<Fp8>(gridData, args...);
4759  case GridType::Fp16:
4760  return OpT::template known<Fp16>(gridData, args...);
4761  case GridType::FpN:
4762  return OpT::template known<FpN>(gridData, args...);
4763  case GridType::Vec4f:
4764  return OpT::template known<Vec4f>(gridData, args...);
4765  case GridType::Vec4d:
4766  return OpT::template known<Vec4d>(gridData, args...);
4767  case GridType::UInt8:
4768  return OpT::template known<uint8_t>(gridData, args...);
4769  default:
4770  return OpT::unknown(gridData, args...);
4771  }
4772 }// callNanoGrid
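/**
* @brief Illustrative functor compatible with callNanoGrid, following the OpT pattern documented
* above. The struct name is hypothetical and not part of NanoVDB.
* @code
* struct GetActiveVoxelCount
* {
*     template<typename BuildT>
*     static uint64_t known(const nanovdb::GridData* gridData)
*     {
*         return reinterpret_cast<const nanovdb::NanoGrid<BuildT>*>(gridData)->activeVoxelCount();
*     }
*     static uint64_t unknown(const nanovdb::GridData*) { return 0u; } // unsupported GridType
* };
* // usage: const uint64_t n = nanovdb::callNanoGrid<GetActiveVoxelCount>(gridData);
* @endcode
*/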
4773 
4774 // --------------------------> ReadAccessor <------------------------------------
4775 
4776 /// @brief A read-only value accessor with three levels of node caching. This allows for
4777 /// inverse tree traversal during lookup, which is on average significantly faster
4778 /// than calling the equivalent method on the tree (i.e. top-down traversal).
4779 ///
4780 /// @note By virtue of the fact that a value accessor accelerates random access operations
4781 /// by re-using cached access patterns, this accessor should be reused for multiple access
4782 /// operations. In other words, never create an instance of this accessor for a single
4783 /// access only. In general avoid single access operations with this accessor, and
4784 /// if that is not possible call the corresponding method on the tree instead.
4785 ///
4786 /// @warning Since this ReadAccessor internally caches raw pointers to the nodes of the tree
4787 /// structure, it is not safe to copy between host and device, or even to share among
4788 /// multiple threads on the same host or device. However, it is light-weight, so simply
4789 /// instantiate one per thread (on the host and/or device).
4790 ///
4791 /// @details Used to accelerate random access into a VDB tree. Provides on average
4792 /// O(1) random access operations by means of inverse tree traversal,
4793 /// which amortizes the non-constant time complexity of the root node.
4794 
4795 template<typename BuildT>
4796 class ReadAccessor<BuildT, -1, -1, -1>
4797 {
4798  using GridT = NanoGrid<BuildT>; // grid
4799  using TreeT = NanoTree<BuildT>; // tree
4800  using RootT = NanoRoot<BuildT>; // root node
4801  using LeafT = NanoLeaf<BuildT>; // Leaf node
4802  using FloatType = typename RootT::FloatType;
4803  using CoordValueType = typename RootT::CoordType::ValueType;
4804 
4805  mutable const RootT* mRoot; // 8 bytes (mutable to allow for access methods to be const)
4806 public:
4807  using BuildType = BuildT;
4808  using ValueType = typename RootT::ValueType;
4809  using CoordType = typename RootT::CoordType;
4810 
4811  static const int CacheLevels = 0;
4812 
4813  /// @brief Constructor from a root node
4814  __hostdev__ ReadAccessor(const RootT& root)
4815  : mRoot{&root}
4816  {
4817  }
4818 
4819  /// @brief Constructor from a grid
4820  __hostdev__ ReadAccessor(const GridT& grid)
4821  : ReadAccessor(grid.tree().root())
4822  {
4823  }
4824 
4825  /// @brief Constructor from a tree
4826  __hostdev__ ReadAccessor(const TreeT& tree)
4827  : ReadAccessor(tree.root())
4828  {
4829  }
4830 
4831  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
4832  /// @note No-op since this template specialization has no cache
4833  __hostdev__ void clear() {}
4834 
4835  __hostdev__ const RootT& root() const { return *mRoot; }
4836 
4837  /// @brief Default constructors
4838  ReadAccessor(const ReadAccessor&) = default;
4839  ~ReadAccessor() = default;
4840  ReadAccessor& operator=(const ReadAccessor&) = default;
4841  __hostdev__ ValueType getValue(const CoordType& ijk) const
4842  {
4843  return this->template get<GetValue<BuildT>>(ijk);
4844  }
4845  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4846  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
4847  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4848  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
4849  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
4850  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
4851  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
4852  template<typename RayT>
4853  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
4854  {
4855  return mRoot->getDimAndCache(ijk, ray, *this);
4856  }
4857  template<typename OpT, typename... ArgsT>
4858  __hostdev__ auto get(const CoordType& ijk, ArgsT&&... args) const
4859  {
4860  return mRoot->template get<OpT>(ijk, args...);
4861  }
4862 
4863  template<typename OpT, typename... ArgsT>
4864  __hostdev__ auto set(const CoordType& ijk, ArgsT&&... args) const
4865  {
4866  return const_cast<RootT*>(mRoot)->template set<OpT>(ijk, args...);
4867  }
4868 
4869 private:
4870  /// @brief Allow nodes to insert themselves into the cache.
4871  template<typename>
4872  friend class RootNode;
4873  template<typename, uint32_t>
4874  friend class InternalNode;
4875  template<typename, typename, template<uint32_t> class, uint32_t>
4876  friend class LeafNode;
4877 
4878  /// @brief No-op
4879  template<typename NodeT>
4880  __hostdev__ void insert(const CoordType&, const NodeT*) const {}
4881 }; // ReadAccessor<ValueT, -1, -1, -1> class
4882 
4883 /// @brief Node caching at a single tree level
4884 template<typename BuildT, int LEVEL0>
4885 class ReadAccessor<BuildT, LEVEL0, -1, -1> //e.g. 0, 1, 2
4886 {
4887  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 should be 0, 1, or 2");
4888 
4889  using GridT = NanoGrid<BuildT>; // grid
4890  using TreeT = NanoTree<BuildT>;
4891  using RootT = NanoRoot<BuildT>; // root node
4892  using LeafT = NanoLeaf<BuildT>; // Leaf node
4893  using NodeT = typename NodeTrait<TreeT, LEVEL0>::type;
4894  using CoordT = typename RootT::CoordType;
4895  using ValueT = typename RootT::ValueType;
4896 
4897  using FloatType = typename RootT::FloatType;
4898  using CoordValueType = typename RootT::CoordT::ValueType;
4899 
4900  // All member data are mutable to allow for access methods to be const
4901  mutable CoordT mKey; // 3*4 = 12 bytes
4902  mutable const RootT* mRoot; // 8 bytes
4903  mutable const NodeT* mNode; // 8 bytes
4904 
4905 public:
4906  using BuildType = BuildT;
4907  using ValueType = ValueT;
4908  using CoordType = CoordT;
4909 
4910  static const int CacheLevels = 1;
4911 
4912  /// @brief Constructor from a root node
4913  __hostdev__ ReadAccessor(const RootT& root)
4914  : mKey(CoordType::max())
4915  , mRoot(&root)
4916  , mNode(nullptr)
4917  {
4918  }
4919 
4920  /// @brief Constructor from a grid
4921  __hostdev__ ReadAccessor(const GridT& grid)
4922  : ReadAccessor(grid.tree().root())
4923  {
4924  }
4925 
4926  /// @brief Constructor from a tree
4927  __hostdev__ ReadAccessor(const TreeT& tree)
4928  : ReadAccessor(tree.root())
4929  {
4930  }
4931 
4932  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
4933  __hostdev__ void clear()
4934  {
4935  mKey = CoordType::max();
4936  mNode = nullptr;
4937  }
4938 
4939  __hostdev__ const RootT& root() const { return *mRoot; }
4940 
4941  /// @brief Default constructors
4942  ReadAccessor(const ReadAccessor&) = default;
4943  ~ReadAccessor() = default;
4944  ReadAccessor& operator=(const ReadAccessor&) = default;
4945 
4946  __hostdev__ bool isCached(const CoordType& ijk) const
4947  {
4948  return (ijk[0] & int32_t(~NodeT::MASK)) == mKey[0] &&
4949  (ijk[1] & int32_t(~NodeT::MASK)) == mKey[1] &&
4950  (ijk[2] & int32_t(~NodeT::MASK)) == mKey[2];
4951  }
4952 
4952 
4953  __hostdev__ ValueType getValue(const CoordType& ijk) const
4954  {
4955  return this->template get<GetValue<BuildT>>(ijk);
4956  }
4957  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4958  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
4959  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
4960  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
4961  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
4962  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
4963  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
4964 
4965  template<typename RayT>
4966  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
4967  {
4968  if (this->isCached(ijk)) return mNode->getDimAndCache(ijk, ray, *this);
4969  return mRoot->getDimAndCache(ijk, ray, *this);
4970  }
4971 
4972  template<typename OpT, typename... ArgsT>
4973  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
4974  {
4975  if constexpr(OpT::LEVEL <= LEVEL0) if (this->isCached(ijk)) return mNode->template getAndCache<OpT>(ijk, *this, args...);
4976  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
4977  }
4978 
4979  template<typename OpT, typename... ArgsT>
4980  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args) const
4981  {
4982  if constexpr(OpT::LEVEL <= LEVEL0) if (this->isCached(ijk)) return const_cast<NodeT*>(mNode)->template setAndCache<OpT>(ijk, *this, args...);
4983  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
4984  }
4985 
4986 private:
4987  /// @brief Allow nodes to insert themselves into the cache.
4988  template<typename>
4989  friend class RootNode;
4990  template<typename, uint32_t>
4991  friend class InternalNode;
4992  template<typename, typename, template<uint32_t> class, uint32_t>
4993  friend class LeafNode;
4994 
4995  /// @brief Inserts a node and its key into this ReadAccessor
4996  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
4997  {
4998  mKey = ijk & ~NodeT::MASK;
4999  mNode = node;
5000  }
5001 
5002  // no-op
5003  template<typename OtherNodeT>
5004  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
5005 
5006 }; // ReadAccessor<ValueT, LEVEL0>
5007 
5008 template<typename BuildT, int LEVEL0, int LEVEL1>
5009 class ReadAccessor<BuildT, LEVEL0, LEVEL1, -1> //e.g. (0,1), (1,2), (0,2)
5010 {
5011  static_assert(LEVEL0 >= 0 && LEVEL0 <= 2, "LEVEL0 must be 0, 1, 2");
5012  static_assert(LEVEL1 >= 0 && LEVEL1 <= 2, "LEVEL1 must be 0, 1, 2");
5013  static_assert(LEVEL0 < LEVEL1, "Level 0 must be lower than level 1");
5014  using GridT = NanoGrid<BuildT>; // grid
5015  using TreeT = NanoTree<BuildT>;
5016  using RootT = NanoRoot<BuildT>;
5017  using LeafT = NanoLeaf<BuildT>;
5018  using Node1T = typename NodeTrait<TreeT, LEVEL0>::type;
5019  using Node2T = typename NodeTrait<TreeT, LEVEL1>::type;
5020  using CoordT = typename RootT::CoordType;
5021  using ValueT = typename RootT::ValueType;
5022  using FloatType = typename RootT::FloatType;
5023  using CoordValueType = typename RootT::CoordT::ValueType;
5024 
5025  // All member data are mutable to allow for access methods to be const
5026 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 44 bytes total
5027  mutable CoordT mKey; // 3*4 = 12 bytes
5028 #else // 68 bytes total
5029  mutable CoordT mKeys[2]; // 2*3*4 = 24 bytes
5030 #endif
5031  mutable const RootT* mRoot;
5032  mutable const Node1T* mNode1;
5033  mutable const Node2T* mNode2;
5034 
5035 public:
5036  using BuildType = BuildT;
5037  using ValueType = ValueT;
5038  using CoordType = CoordT;
5039 
5040  static const int CacheLevels = 2;
5041 
5042  /// @brief Constructor from a root node
5043  __hostdev__ ReadAccessor(const RootT& root)
5044 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5045  : mKey(CoordType::max())
5046 #else
5047  : mKeys{CoordType::max(), CoordType::max()}
5048 #endif
5049  , mRoot(&root)
5050  , mNode1(nullptr)
5051  , mNode2(nullptr)
5052  {
5053  }
5054 
5055  /// @brief Constructor from a grid
5056  __hostdev__ ReadAccessor(const GridT& grid)
5057  : ReadAccessor(grid.tree().root())
5058  {
5059  }
5060 
5061  /// @brief Constructor from a tree
5062  __hostdev__ ReadAccessor(const TreeT& tree)
5063  : ReadAccessor(tree.root())
5064  {
5065  }
5066 
5067  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
5068  __hostdev__ void clear()
5069  {
5070 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5071  mKey = CoordType::max();
5072 #else
5073  mKeys[0] = mKeys[1] = CoordType::max();
5074 #endif
5075  mNode1 = nullptr;
5076  mNode2 = nullptr;
5077  }
5078 
5079  __hostdev__ const RootT& root() const { return *mRoot; }
5080 
5081  /// @brief Default constructors
5082  ReadAccessor(const ReadAccessor&) = default;
5083  ~ReadAccessor() = default;
5084  ReadAccessor& operator=(const ReadAccessor&) = default;
5085 
5086 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5087  __hostdev__ bool isCached1(CoordValueType dirty) const
5088  {
5089  if (!mNode1)
5090  return false;
5091  if (dirty & int32_t(~Node1T::MASK)) {
5092  mNode1 = nullptr;
5093  return false;
5094  }
5095  return true;
5096  }
5097  __hostdev__ bool isCached2(CoordValueType dirty) const
5098  {
5099  if (!mNode2)
5100  return false;
5101  if (dirty & int32_t(~Node2T::MASK)) {
5102  mNode2 = nullptr;
5103  return false;
5104  }
5105  return true;
5106  }
5107  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
5108  {
5109  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
5110  }
5111 #else
5112  __hostdev__ bool isCached1(const CoordType& ijk) const
5113  {
5114  return (ijk[0] & int32_t(~Node1T::MASK)) == mKeys[0][0] &&
5115  (ijk[1] & int32_t(~Node1T::MASK)) == mKeys[0][1] &&
5116  (ijk[2] & int32_t(~Node1T::MASK)) == mKeys[0][2];
5117  }
5118  __hostdev__ bool isCached2(const CoordType& ijk) const
5119  {
5120  return (ijk[0] & int32_t(~Node2T::MASK)) == mKeys[1][0] &&
5121  (ijk[1] & int32_t(~Node2T::MASK)) == mKeys[1][1] &&
5122  (ijk[2] & int32_t(~Node2T::MASK)) == mKeys[1][2];
5123  }
5124 #endif
5125 
5126  __hostdev__ ValueType getValue(const CoordType& ijk) const
5127  {
5128  return this->template get<GetValue<BuildT>>(ijk);
5129  }
5130  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5131  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
5132  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5133  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
5134  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
5135  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
5136  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
5137 
5138  template<typename RayT>
5139  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5140  {
5141 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5142  const CoordValueType dirty = this->computeDirty(ijk);
5143 #else
5144  auto&& dirty = ijk;
5145 #endif
5146  if (this->isCached1(dirty)) {
5147  return mNode1->getDimAndCache(ijk, ray, *this);
5148  } else if (this->isCached2(dirty)) {
5149  return mNode2->getDimAndCache(ijk, ray, *this);
5150  }
5151  return mRoot->getDimAndCache(ijk, ray, *this);
5152  }
5153 
5154  template<typename OpT, typename... ArgsT>
5155  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
5156  {
5157 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5158  const CoordValueType dirty = this->computeDirty(ijk);
5159 #else
5160  auto&& dirty = ijk;
5161 #endif
5162  if constexpr(OpT::LEVEL <= LEVEL0) {
5163  if (this->isCached1(dirty)) return mNode1->template getAndCache<OpT>(ijk, *this, args...);
5164  } else if constexpr(OpT::LEVEL <= LEVEL1) {
5165  if (this->isCached2(dirty)) return mNode2->template getAndCache<OpT>(ijk, *this, args...);
5166  }
5167  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
5168  }
5169 
5170  template<typename OpT, typename... ArgsT>
5171  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args) const
5172  {
5173 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5174  const CoordValueType dirty = this->computeDirty(ijk);
5175 #else
5176  auto&& dirty = ijk;
5177 #endif
5178  if constexpr(OpT::LEVEL <= LEVEL0) {
5179  if (this->isCached1(dirty)) return const_cast<Node1T*>(mNode1)->template setAndCache<OpT>(ijk, *this, args...);
5180  } else if constexpr(OpT::LEVEL <= LEVEL1) {
5181  if (this->isCached2(dirty)) return const_cast<Node2T*>(mNode2)->template setAndCache<OpT>(ijk, *this, args...);
5182  }
5183  return const_cast<RootT*>(mRoot)->template setAndCache<OpT>(ijk, *this, args...);
5184  }
5185 
5186 private:
5187  /// @brief Allow nodes to insert themselves into the cache.
5188  template<typename>
5189  friend class RootNode;
5190  template<typename, uint32_t>
5191  friend class InternalNode;
5192  template<typename, typename, template<uint32_t> class, uint32_t>
5193  friend class LeafNode;
5194 
5195  /// @brief Inserts a node and its key into this ReadAccessor
5196  __hostdev__ void insert(const CoordType& ijk, const Node1T* node) const
5197  {
5198 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5199  mKey = ijk;
5200 #else
5201  mKeys[0] = ijk & ~Node1T::MASK;
5202 #endif
5203  mNode1 = node;
5204  }
5205  __hostdev__ void insert(const CoordType& ijk, const Node2T* node) const
5206  {
5207 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5208  mKey = ijk;
5209 #else
5210  mKeys[1] = ijk & ~Node2T::MASK;
5211 #endif
5212  mNode2 = node;
5213  }
5214  template<typename OtherNodeT>
5215  __hostdev__ void insert(const CoordType&, const OtherNodeT*) const {}
5216 }; // ReadAccessor<BuildT, LEVEL0, LEVEL1>
5217 
5218 /// @brief Node caching at all (three) tree levels
5219 template<typename BuildT>
5220 class ReadAccessor<BuildT, 0, 1, 2>
5221 {
5222  using GridT = NanoGrid<BuildT>; // grid
5223  using TreeT = NanoTree<BuildT>;
5224  using RootT = NanoRoot<BuildT>; // root node
5225  using NodeT2 = NanoUpper<BuildT>; // upper internal node
5226  using NodeT1 = NanoLower<BuildT>; // lower internal node
5227  using LeafT = NanoLeaf<BuildT>; // Leaf node
5228  using CoordT = typename RootT::CoordType;
5229  using ValueT = typename RootT::ValueType;
5230 
5231  using FloatType = typename RootT::FloatType;
5232  using CoordValueType = typename RootT::CoordT::ValueType;
5233 
5234  // All member data are mutable to allow for access methods to be const
5235 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY // 44 bytes total
5236  mutable CoordT mKey; // 3*4 = 12 bytes
5237 #else // 68 bytes total
5238  mutable CoordT mKeys[3]; // 3*3*4 = 36 bytes
5239 #endif
5240  mutable const RootT* mRoot;
5241  mutable const void* mNode[3]; // 3*8 = 24 bytes (32 bytes including mRoot above)
5242 
5243 public:
5244  using BuildType = BuildT;
5245  using ValueType = ValueT;
5246  using CoordType = CoordT;
5247 
5248  static const int CacheLevels = 3;
5249 
5250  /// @brief Constructor from a root node
5251  __hostdev__ ReadAccessor(const RootT& root)
5252 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5253  : mKey(CoordType::max())
5254 #else
5255  : mKeys{CoordType::max(), CoordType::max(), CoordType::max()}
5256 #endif
5257  , mRoot(&root)
5258  , mNode{nullptr, nullptr, nullptr}
5259  {
5260  }
5261 
5262  /// @brief Constructor from a grid
5263  __hostdev__ ReadAccessor(const GridT& grid)
5264  : ReadAccessor(grid.tree().root())
5265  {
5266  }
5267 
5268  /// @brief Constructor from a tree
5269  __hostdev__ ReadAccessor(const TreeT& tree)
5270  : ReadAccessor(tree.root())
5271  {
5272  }
5273 
5274  __hostdev__ const RootT& root() const { return *mRoot; }
5275 
5276  /// @brief Default constructors
5277  ReadAccessor(const ReadAccessor&) = default;
5278  ~ReadAccessor() = default;
5279  ReadAccessor& operator=(const ReadAccessor&) = default;
5280 
5281  /// @brief Return a const pointer to the cached node of the specified type
5282  ///
5283  /// @warning The return value could be NULL.
5284  template<typename NodeT>
5285  __hostdev__ const NodeT* getNode() const
5286  {
5287  using T = typename NodeTrait<TreeT, NodeT::LEVEL>::type;
5288  static_assert(util::is_same<T, NodeT>::value, "ReadAccessor::getNode: Invalid node type");
5289  return reinterpret_cast<const T*>(mNode[NodeT::LEVEL]);
5290  }
5291 
5292  template<int LEVEL>
5293  __hostdev__ const typename NodeTrait<TreeT, LEVEL>::type* getNode() const
5294  {
5295  using T = typename NodeTrait<TreeT, LEVEL>::type;
5296  static_assert(LEVEL >= 0 && LEVEL <= 2, "ReadAccessor::getNode: Invalid node type");
5297  return reinterpret_cast<const T*>(mNode[LEVEL]);
5298  }
5299 
5300  /// @brief Reset this accessor to its initial state, i.e. with an empty cache
5301  __hostdev__ void clear()
5302  {
5303 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5304  mKey = CoordType::max();
5305 #else
5306  mKeys[0] = mKeys[1] = mKeys[2] = CoordType::max();
5307 #endif
5308  mNode[0] = mNode[1] = mNode[2] = nullptr;
5309  }
5310 
5311 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5312  template<typename NodeT>
5313  __hostdev__ bool isCached(CoordValueType dirty) const
5314  {
5315  if (!mNode[NodeT::LEVEL])
5316  return false;
5317  if (dirty & int32_t(~NodeT::MASK)) {
5318  mNode[NodeT::LEVEL] = nullptr;
5319  return false;
5320  }
5321  return true;
5322  }
5323 
5324  __hostdev__ CoordValueType computeDirty(const CoordType& ijk) const
5325  {
5326  return (ijk[0] ^ mKey[0]) | (ijk[1] ^ mKey[1]) | (ijk[2] ^ mKey[2]);
5327  }
5328 #else
5329  template<typename NodeT>
5330  __hostdev__ bool isCached(const CoordType& ijk) const
5331  {
5332  return (ijk[0] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][0] &&
5333  (ijk[1] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][1] &&
5334  (ijk[2] & int32_t(~NodeT::MASK)) == mKeys[NodeT::LEVEL][2];
5335  }
5336 #endif
5337 
5338  __hostdev__ ValueType getValue(const CoordType& ijk) const {return this->template get<GetValue<BuildT>>(ijk);}
5339  __hostdev__ ValueType getValue(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5340  __hostdev__ ValueType operator()(const CoordType& ijk) const { return this->template get<GetValue<BuildT>>(ijk); }
5341  __hostdev__ ValueType operator()(int i, int j, int k) const { return this->template get<GetValue<BuildT>>(CoordType(i, j, k)); }
5342  __hostdev__ auto getNodeInfo(const CoordType& ijk) const { return this->template get<GetNodeInfo<BuildT>>(ijk); }
5343  __hostdev__ bool isActive(const CoordType& ijk) const { return this->template get<GetState<BuildT>>(ijk); }
5344  __hostdev__ bool probeValue(const CoordType& ijk, ValueType& v) const { return this->template get<ProbeValue<BuildT>>(ijk, v); }
5345  __hostdev__ const LeafT* probeLeaf(const CoordType& ijk) const { return this->template get<GetLeaf<BuildT>>(ijk); }
5346 
5347  template<typename OpT, typename... ArgsT>
5348  __hostdev__ typename OpT::Type get(const CoordType& ijk, ArgsT&&... args) const
5349  {
5350 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5351  const CoordValueType dirty = this->computeDirty(ijk);
5352 #else
5353  auto&& dirty = ijk;
5354 #endif
5355  if constexpr(OpT::LEVEL <=0) {
5356  if (this->isCached<LeafT>(dirty)) return ((const LeafT*)mNode[0])->template getAndCache<OpT>(ijk, *this, args...);
5357  } else if constexpr(OpT::LEVEL <= 1) {
5358  if (this->isCached<NodeT1>(dirty)) return ((const NodeT1*)mNode[1])->template getAndCache<OpT>(ijk, *this, args...);
5359  } else if constexpr(OpT::LEVEL <= 2) {
5360  if (this->isCached<NodeT2>(dirty)) return ((const NodeT2*)mNode[2])->template getAndCache<OpT>(ijk, *this, args...);
5361  }
5362  return mRoot->template getAndCache<OpT>(ijk, *this, args...);
5363  }
5364 
5365  template<typename OpT, typename... ArgsT>
5366  __hostdev__ void set(const CoordType& ijk, ArgsT&&... args) const
5367  {
5368 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5369  const CoordValueType dirty = this->computeDirty(ijk);
5370 #else
5371  auto&& dirty = ijk;
5372 #endif
5373  if constexpr(OpT::LEVEL <= 0) {
5374  if (this->isCached<LeafT>(dirty)) return ((LeafT*)mNode[0])->template setAndCache<OpT>(ijk, *this, args...);
5375  } else if constexpr(OpT::LEVEL <= 1) {
5376  if (this->isCached<NodeT1>(dirty)) return ((NodeT1*)mNode[1])->template setAndCache<OpT>(ijk, *this, args...);
5377  } else if constexpr(OpT::LEVEL <= 2) {
5378  if (this->isCached<NodeT2>(dirty)) return ((NodeT2*)mNode[2])->template setAndCache<OpT>(ijk, *this, args...);
5379  }
5380  return ((RootT*)mRoot)->template setAndCache<OpT>(ijk, *this, args...);
5381  }
5382 
5383  template<typename RayT>
5384  __hostdev__ uint32_t getDim(const CoordType& ijk, const RayT& ray) const
5385  {
5386 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5387  const CoordValueType dirty = this->computeDirty(ijk);
5388 #else
5389  auto&& dirty = ijk;
5390 #endif
5391  if (this->isCached<LeafT>(dirty)) {
5392  return ((LeafT*)mNode[0])->getDimAndCache(ijk, ray, *this);
5393  } else if (this->isCached<NodeT1>(dirty)) {
5394  return ((NodeT1*)mNode[1])->getDimAndCache(ijk, ray, *this);
5395  } else if (this->isCached<NodeT2>(dirty)) {
5396  return ((NodeT2*)mNode[2])->getDimAndCache(ijk, ray, *this);
5397  }
5398  return mRoot->getDimAndCache(ijk, ray, *this);
5399  }
5400 
5401 private:
5402  /// @brief Allow nodes to insert themselves into the cache.
5403  template<typename>
5404  friend class RootNode;
5405  template<typename, uint32_t>
5406  friend class InternalNode;
5407  template<typename, typename, template<uint32_t> class, uint32_t>
5408  friend class LeafNode;
5409 
5410  /// @brief Inserts a node and its key into this ReadAccessor
5411  template<typename NodeT>
5412  __hostdev__ void insert(const CoordType& ijk, const NodeT* node) const
5413  {
5414 #ifdef NANOVDB_USE_SINGLE_ACCESSOR_KEY
5415  mKey = ijk;
5416 #else
5417  mKeys[NodeT::LEVEL] = ijk & ~NodeT::MASK;
5418 #endif
5419  mNode[NodeT::LEVEL] = node;
5420  }
5421 }; // ReadAccessor<BuildT, 0, 1, 2>
5422 
5423 //////////////////////////////////////////////////
5424 
5425 /// @brief Free-standing function for convenient creation of a ReadAccessor with
5426 /// optional and customizable node caching.
5427 ///
5428 /// @details createAccessor<>(grid): No caching of nodes and hence it's thread-safe but slow
5429 /// createAccessor<0>(grid): Caching of leaf nodes only
5430 /// createAccessor<1>(grid): Caching of lower internal nodes only
5431 /// createAccessor<2>(grid): Caching of upper internal nodes only
5432 /// createAccessor<0,1>(grid): Caching of leaf and lower internal nodes
5433 /// createAccessor<0,2>(grid): Caching of leaf and upper internal nodes
5434 /// createAccessor<1,2>(grid): Caching of lower and upper internal nodes
5435 /// createAccessor<0,1,2>(grid): Caching of all nodes at all tree levels
5436 
5437 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5438 ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2> createAccessor(const NanoGrid<ValueT>& grid)
5439 {
5440  return ReadAccessor<ValueT, LEVEL0, LEVEL1, LEVEL2>(grid);
5441 }
5442 
5443 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5445 {
5447 }
5448 
5449 template<int LEVEL0 = -1, int LEVEL1 = -1, int LEVEL2 = -1, typename ValueT = float>
5451 {
5453 }
5454 
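/// Example: a minimal usage sketch, assuming @c grid is a @c NanoGrid<float> (e.g. obtained
/// from a GridHandle via @c gridHandle.grid<float>()):
/// @code
/// auto acc = nanovdb::createAccessor<0, 1, 2>(grid); // cache nodes at all three tree levels
/// const float v = acc.getValue(nanovdb::Coord(1, 2, 3)); // accelerated random access
/// @endcode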
5455 //////////////////////////////////////////////////
5456 
5457 /// @brief This is a convenience class that allows access to grid meta-data
5458 /// that is independent of the value type of a grid. That is, this class
5459 /// can be used to get information about a grid without actually knowing
5460 /// its ValueType.
5461 class GridMetaData
5462 { // 768 bytes (32 byte aligned)
5463  GridData mGridData; // 672B
5464  TreeData mTreeData; // 64B
5465  CoordBBox mIndexBBox; // 24B. AABB of active values in index space.
5466  uint32_t mRootTableSize, mPadding{0}; // 8B
5467 
5468 public:
5469  template<typename T>
5470  GridMetaData(const NanoGrid<T>& grid)
5471  {
5472  mGridData = *grid.data();
5473  mTreeData = *grid.tree().data();
5474  mIndexBBox = grid.indexBBox();
5475  mRootTableSize = grid.tree().root().getTableSize();
5476  }
5477  GridMetaData(const GridData* gridData)
5478  {
5479  if (GridMetaData::safeCast(gridData)) {
5480  *this = *reinterpret_cast<const GridMetaData*>(gridData);
5481  //util::memcpy(this, (const GridMetaData*)gridData);
5482  } else {// otherwise copy each member individually
5483  mGridData = *gridData;
5484  mTreeData = *reinterpret_cast<const TreeData*>(gridData->treePtr());
5485  mIndexBBox = gridData->indexBBox();
5486  mRootTableSize = gridData->rootTableSize();
5487  }
5488  }
5489  GridMetaData& operator=(const GridMetaData&) = default;
5490  /// @brief return true if the RootData follows right after the TreeData.
5491  /// If so, this implies that it's safe to cast the grid from which
5492  /// this instance was constructed to a GridMetaData
5493  __hostdev__ bool safeCast() const { return mTreeData.isRootNext(); }
5494 
5495  /// @brief return true if it is safe to cast the grid to a pointer
5496  /// of type GridMetaData, i.e. construction can be avoided.
5497  __hostdev__ static bool safeCast(const GridData *gridData){
5498  NANOVDB_ASSERT(gridData && gridData->isValid());
5499  return gridData->isRootConnected();
5500  }
5501  /// @brief return true if it is safe to cast the grid to a pointer
5502  /// of type GridMetaData, i.e. construction can be avoided.
5503  template<typename T>
5504  __hostdev__ static bool safeCast(const NanoGrid<T>& grid){return grid.tree().isRootNext();}
5505  __hostdev__ bool isValid() const { return mGridData.isValid(); }
5506  __hostdev__ const GridType& gridType() const { return mGridData.mGridType; }
5507  __hostdev__ const GridClass& gridClass() const { return mGridData.mGridClass; }
5508  __hostdev__ bool isLevelSet() const { return mGridData.mGridClass == GridClass::LevelSet; }
5509  __hostdev__ bool isFogVolume() const { return mGridData.mGridClass == GridClass::FogVolume; }
5510  __hostdev__ bool isStaggered() const { return mGridData.mGridClass == GridClass::Staggered; }
5511  __hostdev__ bool isPointIndex() const { return mGridData.mGridClass == GridClass::PointIndex; }
5512  __hostdev__ bool isGridIndex() const { return mGridData.mGridClass == GridClass::IndexGrid; }
5513  __hostdev__ bool isPointData() const { return mGridData.mGridClass == GridClass::PointData; }
5514  __hostdev__ bool isMask() const { return mGridData.mGridClass == GridClass::Topology; }
5515  __hostdev__ bool isUnknown() const { return mGridData.mGridClass == GridClass::Unknown; }
5516  __hostdev__ bool hasMinMax() const { return mGridData.mFlags.isMaskOn(GridFlags::HasMinMax); }
5517  __hostdev__ bool hasBBox() const { return mGridData.mFlags.isMaskOn(GridFlags::HasBBox); }
5518  __hostdev__ bool hasLongGridName() const { return mGridData.mFlags.isMaskOn(GridFlags::HasLongGridName); }
5519  __hostdev__ bool hasAverage() const { return mGridData.mFlags.isMaskOn(GridFlags::HasAverage); }
5520  __hostdev__ bool hasStdDeviation() const { return mGridData.mFlags.isMaskOn(GridFlags::HasStdDeviation); }
5521  __hostdev__ bool isBreadthFirst() const { return mGridData.mFlags.isMaskOn(GridFlags::IsBreadthFirst); }
5522  __hostdev__ uint64_t gridSize() const { return mGridData.mGridSize; }
5523  __hostdev__ uint32_t gridIndex() const { return mGridData.mGridIndex; }
5524  __hostdev__ uint32_t gridCount() const { return mGridData.mGridCount; }
5525  __hostdev__ const char* shortGridName() const { return mGridData.mGridName; }
5526  __hostdev__ const Map& map() const { return mGridData.mMap; }
5527  __hostdev__ const Vec3dBBox& worldBBox() const { return mGridData.mWorldBBox; }
5528  __hostdev__ const CoordBBox& indexBBox() const { return mIndexBBox; }
5529  __hostdev__ Vec3d voxelSize() const { return mGridData.mVoxelSize; }
5530  __hostdev__ uint32_t blindDataCount() const { return mGridData.mBlindMetadataCount; }
5531  __hostdev__ uint64_t activeVoxelCount() const { return mTreeData.mVoxelCount; }
5532  __hostdev__ const uint32_t& activeTileCount(uint32_t level) const { return mTreeData.mTileCount[level - 1]; }
5533  __hostdev__ uint32_t nodeCount(uint32_t level) const { return mTreeData.mNodeCount[level]; }
5534  __hostdev__ const Checksum& checksum() const { return mGridData.mChecksum; }
5535  __hostdev__ uint32_t rootTableSize() const { return mRootTableSize; }
5536  __hostdev__ bool isEmpty() const { return mRootTableSize == 0; }
5537  __hostdev__ Version version() const { return mGridData.mVersion; }
5538 }; // GridMetaData
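
/// Example: a minimal sketch, assuming @c gridData points to the first (valid) grid in a NanoVDB buffer:
/// @code
/// nanovdb::GridMetaData meta(gridData); // value-type agnostic view of the grid
/// if (meta.isLevelSet()) {
///     printf("level set \"%s\" with %llu active voxels\n",
///            meta.shortGridName(), (unsigned long long)meta.activeVoxelCount());
/// }
/// @endcode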
5539 
5540 /// @brief Class to access points at a specific voxel location
5541 ///
5542 /// @note If the GridClass is PointIndex, AttT should be uint32_t, and if it is PointData, AttT should be Vec3f
5543 template<typename AttT, typename BuildT = uint32_t>
5544 class PointAccessor : public DefaultReadAccessor<BuildT>
5545 {
5546  using AccT = DefaultReadAccessor<BuildT>;
5547  const NanoGrid<BuildT>& mGrid;
5548  const AttT* mData;
5549 
5550 public:
5551  PointAccessor(const NanoGrid<BuildT>& grid)
5552  : AccT(grid.tree().root())
5553  , mGrid(grid)
5554  , mData(grid.template getBlindData<AttT>(0))
5555  {
5556  NANOVDB_ASSERT(grid.gridType() == toGridType<BuildT>());
5559  }
5560 
5561  /// @brief return true if this access was initialized correctly
5562  __hostdev__ operator bool() const { return mData != nullptr; }
5563 
5564  __hostdev__ const NanoGrid<BuildT>& grid() const { return mGrid; }
5565 
5566 /// @brief Return the total number of points in the grid and set the
5567  /// iterators to the complete range of points.
5568  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
5569  {
5570  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
5571  begin = mData;
5572  end = begin + count;
5573  return count;
5574  }
5575  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
5576  /// If this return value is larger than zero then the iterators @a begin and @a end
5577  /// will point to all the attributes contained within that leaf node.
5578  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5579  {
5580  auto* leaf = this->probeLeaf(ijk);
5581  if (leaf == nullptr) {
5582  return 0;
5583  }
5584  begin = mData + leaf->minimum();
5585  end = begin + leaf->maximum();
5586  return leaf->maximum();
5587  }
5588 
5589  /// @brief get iterators over attributes to points at a specific voxel location
5590  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5591  {
5592  begin = end = nullptr;
5593  if (auto* leaf = this->probeLeaf(ijk)) {
5594  const uint32_t offset = NanoLeaf<BuildT>::CoordToOffset(ijk);
5595  if (leaf->isActive(offset)) {
5596  begin = mData + leaf->minimum();
5597  end = begin + leaf->getValue(offset);
5598  if (offset > 0u)
5599  begin += leaf->getValue(offset - 1);
5600  }
5601  }
5602  return end - begin;
5603  }
5604 }; // PointAccessor
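
/// Example: a minimal sketch, assuming @c grid is a @c NanoGrid<uint32_t> of class GridClass::PointIndex
/// whose first blind-data channel stores the point indices:
/// @code
/// nanovdb::PointAccessor<uint32_t> acc(grid);
/// const uint32_t *begin = nullptr, *end = nullptr;
/// const uint64_t count = acc.voxelPoints(nanovdb::Coord(7, 11, 13), begin, end); // count == end - begin
/// for (const uint32_t* p = begin; p != end; ++p) { /* process point index *p */ }
/// @endcode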
5605 
5606 template<typename AttT>
5607 class PointAccessor<AttT, Point> : public DefaultReadAccessor<Point>
5608 {
5609  using AccT = DefaultReadAccessor<Point>;
5610  const NanoGrid<Point>& mGrid;
5611  const AttT* mData;
5612 
5613 public:
5614  PointAccessor(const NanoGrid<Point>& grid)
5615  : AccT(grid.tree().root())
5616  , mGrid(grid)
5617  , mData(grid.template getBlindData<AttT>(0))
5618  {
5619  NANOVDB_ASSERT(mData);
5626  }
5627 
5628  /// @brief return true if this access was initialized correctly
5629  __hostdev__ operator bool() const { return mData != nullptr; }
5630 
5631  __hostdev__ const NanoGrid<Point>& grid() const { return mGrid; }
5632 
5633 /// @brief Return the total number of points in the grid and set the
5634  /// iterators to the complete range of points.
5635  __hostdev__ uint64_t gridPoints(const AttT*& begin, const AttT*& end) const
5636  {
5637  const uint64_t count = mGrid.blindMetaData(0u).mValueCount;
5638  begin = mData;
5639  end = begin + count;
5640  return count;
5641  }
5642  /// @brief Return the number of points in the leaf node containing the coordinate @a ijk.
5643  /// If this return value is larger than zero then the iterators @a begin and @a end
5644  /// will point to all the attributes contained within that leaf node.
5645  __hostdev__ uint64_t leafPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5646  {
5647  auto* leaf = this->probeLeaf(ijk);
5648  if (leaf == nullptr)
5649  return 0;
5650  begin = mData + leaf->offset();
5651  end = begin + leaf->pointCount();
5652  return leaf->pointCount();
5653  }
5654 
5655  /// @brief get iterators over attributes to points at a specific voxel location
5656  __hostdev__ uint64_t voxelPoints(const Coord& ijk, const AttT*& begin, const AttT*& end) const
5657  {
5658  if (auto* leaf = this->probeLeaf(ijk)) {
5659  const uint32_t n = NanoLeaf<Point>::CoordToOffset(ijk);
5660  if (leaf->isActive(n)) {
5661  begin = mData + leaf->first(n);
5662  end = mData + leaf->last(n);
5663  return end - begin;
5664  }
5665  }
5666  begin = end = nullptr;
5667  return 0u; // no leaf or inactive voxel
5668  }
5669 }; // PointAccessor<AttT, Point>
5670 
5671 /// @brief Class to access values in channels at a specific voxel location.
5672 ///
5673 /// @note The ChannelT template parameter can be either const or non-const.
5674 template<typename ChannelT, typename IndexT = ValueIndex>
5675 class ChannelAccessor : public DefaultReadAccessor<IndexT>
5676 {
5677  static_assert(BuildTraits<IndexT>::is_index, "Expected an index build type");
5678  using BaseT = DefaultReadAccessor<IndexT>;
5679 
5680  const NanoGrid<IndexT>& mGrid;
5681  ChannelT* mChannel;
5682 
5683 public:
5684  using ValueType = ChannelT;
5685  using TreeType = NanoTree<IndexT>;
5686  using AccessorType = ChannelAccessor<ChannelT, IndexT>;
5687 
5688  /// @brief Ctor from an IndexGrid and an integer ID of an internal channel
5689  /// that is assumed to exist as blind data in the IndexGrid.
5690  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, uint32_t channelID = 0u)
5691  : BaseT(grid.tree().root())
5692  , mGrid(grid)
5693  , mChannel(nullptr)
5694  {
5695  NANOVDB_ASSERT(isIndex(grid.gridType()));
5697  this->setChannel(channelID);
5698  }
5699 
5700  /// @brief Ctor from an IndexGrid and an external channel
5701  __hostdev__ ChannelAccessor(const NanoGrid<IndexT>& grid, ChannelT* channelPtr)
5702  : BaseT(grid.tree().root())
5703  , mGrid(grid)
5704  , mChannel(channelPtr)
5705  {
5706  NANOVDB_ASSERT(isIndex(grid.gridType()));
5708  }
5709 
5710  /// @brief return true if this access was initialized correctly
5711  __hostdev__ operator bool() const { return mChannel != nullptr; }
5712 
5713  /// @brief Return a const reference to the IndexGrid
5714  __hostdev__ const NanoGrid<IndexT>& grid() const { return mGrid; }
5715 
5716  /// @brief Return a const reference to the tree of the IndexGrid
5717  __hostdev__ const TreeType& tree() const { return mGrid.tree(); }
5718 
5719  /// @brief Return a vector of the axial voxel sizes
5720  __hostdev__ const Vec3d& voxelSize() const { return mGrid.voxelSize(); }
5721 
5722  /// @brief Return total number of values indexed by the IndexGrid
5723  __hostdev__ const uint64_t& valueCount() const { return mGrid.valueCount(); }
5724 
5725  /// @brief Change to an external channel
5726  /// @return Pointer to channel data
5727  __hostdev__ ChannelT* setChannel(ChannelT* channelPtr) {return mChannel = channelPtr;}
5728 
5729  /// @brief Change to an internal channel, assuming it exists as blind data
5730  /// in the IndexGrid.
5731  /// @return Pointer to channel data, which could be NULL if channelID is out of range or
5732  /// if ChannelT does not match the value type of the blind data
5733  __hostdev__ ChannelT* setChannel(uint32_t channelID)
5734  {
5735  return mChannel = const_cast<ChannelT*>(mGrid.template getBlindData<ChannelT>(channelID));
5736  }
5737 
5738  /// @brief Return the linear offset into a channel that maps to the specified coordinate
5739  __hostdev__ uint64_t getIndex(const math::Coord& ijk) const { return BaseT::getValue(ijk); }
5740  __hostdev__ uint64_t idx(int i, int j, int k) const { return BaseT::getValue(math::Coord(i, j, k)); }
5741 
5742  /// @brief Return the value from a cached channel that maps to the specified coordinate
5743  __hostdev__ ChannelT& getValue(const math::Coord& ijk) const { return mChannel[BaseT::getValue(ijk)]; }
5744  __hostdev__ ChannelT& operator()(const math::Coord& ijk) const { return this->getValue(ijk); }
5745  __hostdev__ ChannelT& operator()(int i, int j, int k) const { return this->getValue(math::Coord(i, j, k)); }
5746 
5747  /// @brief Return the active state of the specified voxel and update @a v with its value
5748  __hostdev__ bool probeValue(const math::Coord& ijk, typename util::remove_const<ChannelT>::type& v) const
5749  {
5750  uint64_t idx;
5751  const bool isActive = BaseT::probeValue(ijk, idx);
5752  v = mChannel[idx];
5753  return isActive;
5754  }
5755  /// @brief Return the value from a specified channel that maps to the specified coordinate
5756  ///
5757  /// @note The template parameter can be either const or non-const
5758  template<typename T>
5759  __hostdev__ T& getValue(const math::Coord& ijk, T* channelPtr) const { return channelPtr[BaseT::getValue(ijk)]; }
5760 
5761 }; // ChannelAccessor
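
/// Example: a minimal sketch, assuming @c grid is a @c NanoGrid<nanovdb::ValueIndex> (an IndexGrid)
/// whose first blind-data channel stores @c float values:
/// @code
/// nanovdb::ChannelAccessor<float> acc(grid, 0u); // attach to internal channel 0
/// if (acc) { // true if the channel was found and has the expected type
///     const float v = acc(1, 2, 3); // channel value mapped to voxel (1,2,3)
///     const uint64_t n = acc.idx(1, 2, 3); // linear offset into the channel
/// }
/// @endcode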
5762 
5763 #if 0
5764 // This MiniGridHandle class is only included as a stand-alone example. Note that aligned_alloc is a C++17 feature!
5765 // Normally we recommend using GridHandle defined in util/GridHandle.h but this minimal implementation could be an
5766 // alternative when using the IO methods defined below.
5767 struct MiniGridHandle {
5768  struct BufferType {
5769  uint8_t *data;
5770  uint64_t size;
5771  BufferType(uint64_t n=0) : data(std::aligned_alloc(NANOVDB_DATA_ALIGNMENT, n)), size(n) {assert(isValid(data));}
5772  BufferType(BufferType &&other) : data(other.data), size(other.size) {other.data=nullptr; other.size=0;}
5773  ~BufferType() {std::free(data);}
5774  BufferType& operator=(const BufferType &other) = delete;
5775  BufferType& operator=(BufferType &&other){data=other.data; size=other.size; other.data=nullptr; other.size=0; return *this;}
5776  static BufferType create(size_t n, BufferType* dummy = nullptr) {return BufferType(n);}
5777  } buffer;
5778  MiniGridHandle(BufferType &&buf) : buffer(std::move(buf)) {}
5779  const uint8_t* data() const {return buffer.data;}
5780 };// MiniGridHandle
5781 #endif
5782 
5783 namespace io {
5784 
5785 /// @brief Define compression codecs
5786 ///
5787 /// @note NONE is the default, ZIP is slow but compact and BLOSC offers a great balance.
5788 ///
5789 /// @throw NanoVDB optionally supports ZIP and BLOSC compression and will throw an exception
5790 /// if their support is required but missing.
5791 enum class Codec : uint16_t { NONE = 0,
5792  ZIP = 1,
5793  BLOSC = 2,
5794  End = 3,
5795  StrLen = 6 + End };
5796 
5797 __hostdev__ inline const char* toStr(char *dst, Codec codec)
5798 {
5799  switch (codec){
5800  case Codec::NONE: return util::strcpy(dst, "NONE");
5801  case Codec::ZIP: return util::strcpy(dst, "ZIP");
5802  case Codec::BLOSC : return util::strcpy(dst, "BLOSC");// StrLen = 5 + 1 + End
5803  default: return util::strcpy(dst, "END");
5804  }
5805 }
5806 
5807 __hostdev__ inline Codec toCodec(const char *str)
5808 {
5809  if (util::streq(str, "none")) return Codec::NONE;
5810  if (util::streq(str, "zip")) return Codec::ZIP;
5811  if (util::streq(str, "blosc")) return Codec::BLOSC;
5812  return Codec::End;
5813 }
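
/// Example: a minimal sketch of converting between Codec values and strings:
/// @code
/// char str[16];
/// printf("%s\n", nanovdb::io::toStr(str, nanovdb::io::Codec::BLOSC)); // prints "BLOSC"
/// nanovdb::io::Codec codec = nanovdb::io::toCodec("zip"); // Codec::ZIP (matching is lower-case)
/// @endcode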
5814 
5815 /// @brief Data encoded at the head of each segment of a file or stream.
5816 ///
5817 /// @note A file or stream is composed of one or more segments that each contain
/// one or more grids.
5819 struct FileHeader {// 16 bytes
5820  uint64_t magic;// 8 bytes
5821  Version version;// 4 bytes version numbers
5822  uint16_t gridCount;// 2 bytes
5823  Codec codec;// 2 bytes
5824  bool isValid() const {return magic == NANOVDB_MAGIC_NUMB || magic == NANOVDB_MAGIC_FILE;}
5825 }; // FileHeader ( 16 bytes = 2 words )
5826 
5827 // @brief Data encoded for each of the grids associated with a segment.
5828 // Grid size in memory (uint64_t) |
5829 // Grid size on disk (uint64_t) |
5830 // Grid name hash key (uint64_t) |
// Number of active voxels (uint64_t) |
5832 // Grid type (uint32_t) |
5833 // Grid class (uint32_t) |
5834 // Characters in grid name (uint32_t) |
5835 // AABB in world space (2*3*double) | one per grid in file
5836 // AABB in index space (2*3*int) |
5837 // Size of a voxel in world units (3*double) |
5838 // Byte size of the grid name (uint32_t) |
5839 // Number of nodes per level (4*uint32_t) |
// Number of active tiles per level (3*uint32_t) |
5841 // Codec for file compression (uint16_t) |
5842 // Padding due to 8B alignment (uint16_t) |
5843 // Version number (uint32_t) |
5844 struct FileMetaData
5845 {// 176 bytes
5846  uint64_t gridSize, fileSize, nameKey, voxelCount; // 4 * 8 = 32B.
5847  GridType gridType; // 4B.
5848  GridClass gridClass; // 4B.
5849  Vec3dBBox worldBBox; // 2 * 3 * 8 = 48B.
5850  CoordBBox indexBBox; // 2 * 3 * 4 = 24B.
5851  Vec3d voxelSize; // 24B.
5852  uint32_t nameSize; // 4B.
5853  uint32_t nodeCount[4]; //4 x 4 = 16B
5854  uint32_t tileCount[3];// 3 x 4 = 12B
5855  Codec codec; // 2B
5856  uint16_t blindDataCount;// 2B
5857  Version version;// 4B
5858 }; // FileMetaData
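
/// Example: a minimal sketch of reading the header of the first segment of an uncompressed file,
/// assuming <fstream> is available and "grids.nvdb" (a hypothetical name) was written by the functions below:
/// @code
/// std::ifstream is("grids.nvdb", std::ios::binary);
/// nanovdb::io::FileHeader head;
/// is.read(reinterpret_cast<char*>(&head), sizeof(head));
/// if (head.isValid() && head.codec == nanovdb::io::Codec::NONE) {
///     nanovdb::io::FileMetaData meta;
///     is.read(reinterpret_cast<char*>(&meta), sizeof(meta)); // meta data of the first grid
///     is.ignore(meta.nameSize); // skip the grid name; the raw grid buffer follows
/// }
/// @endcode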
5859 
5860 // the following code block uses std and therefore needs to be ignored by CUDA and HIP
5861 #if !defined(__CUDA_ARCH__) && !defined(__HIP__)
5862 
5863 // Note that starting with version 32.6.0 it is possible to write and read raw grid buffers to
5864 // files, e.g. os.write((const char*)buffer.data(), buffer.size()) or more conveniently as
5865 // handle.write(fileName). In addition to this simple approach we offer the methods below to
5866 // write traditional uncompressed nanovdb files that unlike raw files include metadata that
5867 // is used for tools like nanovdb_print.
5868 
5869 ///
5870 /// @brief This is a standalone alternative to io::writeGrid(...,Codec::NONE) defined in util/IO.h
5871 /// Unlike the latter this function has no dependencies at all, not even NanoVDB.h, so it also
5872 /// works if client code only includes PNanoVDB.h!
5873 ///
5874 /// @details Writes a raw NanoVDB buffer, possibly with multiple grids, to a stream WITHOUT compression.
5875 /// It follows all the conventions in util/IO.h so the stream can be read by all existing client
5876 /// code of NanoVDB.
5877 ///
5878 /// @note This method will always write uncompressed grids to the stream, i.e. Blosc or ZIP compression
5879 /// is never applied! This is a fundamental limitation and feature of this standalone function.
5880 ///
5881 /// @throw std::invalid_argument if buffer does not point to a valid NanoVDB grid.
5882 ///
5883 /// @warning This is pretty ugly code that involves lots of pointer and bit manipulations - not for the faint of heart :)
5884 template<typename StreamT> // StreamT class must support: "void write(const char*, size_t)"
5885 void writeUncompressedGrid(StreamT& os, const GridData* gridData, bool raw = false)
5886 {
5887  NANOVDB_ASSERT(gridData->mMagic == NANOVDB_MAGIC_NUMB || gridData->mMagic == NANOVDB_MAGIC_GRID);
5888  NANOVDB_ASSERT(gridData->mVersion.isCompatible());
5889  if (!raw) {// segment with a single grid: FileHeader, FileMetaData, gridName, Grid
5890 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
5891  FileHeader head{NANOVDB_MAGIC_FILE, gridData->mVersion, 1u, Codec::NONE};
5892 #else
5893  FileHeader head{NANOVDB_MAGIC_NUMB, gridData->mVersion, 1u, Codec::NONE};
5894 #endif
5895  const char* gridName = gridData->gridName();
5896  const uint32_t nameSize = util::strlen(gridName) + 1;// include '\0'
5897  const TreeData* treeData = (const TreeData*)(gridData->treePtr());
5898  NANOVDB_ASSERT(gridData->mBlindMetadataCount <= uint32_t( 1u << 16 ));// due to uint32_t -> uint16_t conversion
5899  FileMetaData meta{gridData->mGridSize, gridData->mGridSize, 0u, treeData->mVoxelCount,
5900  gridData->mGridType, gridData->mGridClass, gridData->mWorldBBox,
5901  treeData->bbox(), gridData->mVoxelSize, nameSize,
5902  {treeData->mNodeCount[0], treeData->mNodeCount[1], treeData->mNodeCount[2], 1u},
5903  {treeData->mTileCount[0], treeData->mTileCount[1], treeData->mTileCount[2]},
5904  Codec::NONE, uint16_t(gridData->mBlindMetadataCount), gridData->mVersion }; // FileMetaData
5905  os.write((const char*)&head, sizeof(FileHeader)); // write header
5906  os.write((const char*)&meta, sizeof(FileMetaData)); // write meta data
5907  os.write(gridName, nameSize); // write grid name
5908  }
5909  if (gridData->mGridCount!=1 || gridData->mGridIndex != 0) {
5910  GridData data;
5911  data = *gridData;// deep copy
5912  data.mGridIndex = 0;
5913  data.mGridCount = 1;
5914  os.write((const char*)&data, sizeof(GridData));
5915  os.write((const char*)gridData + sizeof(GridData), gridData->mGridSize - sizeof(GridData));
5916  } else {
5917  os.write((const char*)gridData, gridData->mGridSize);// write the grid
5918  }
5919 }// writeUncompressedGrid
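
/// Example: a minimal sketch, assuming <fstream> is available and @c handle is a GridHandle whose
/// @c gridData() method returns the GridData pointer of its first grid (as used by writeUncompressedGrids
/// below); the file name is hypothetical:
/// @code
/// std::ofstream os("grid.nvdb", std::ios::binary);
/// nanovdb::io::writeUncompressedGrid(os, handle.gridData()); // writes FileHeader, FileMetaData, name and grid
/// @endcode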
5920 
5921 /// @brief Write an IndexGrid to a stream and append blind data
5922 /// @tparam StreamT Type of stream to write the IndexGrid and blind data to
5923 /// @tparam ValueT Type of the blind data
5924 /// @param os Output stream to write to
5925 /// @param gridData GridData containing an IndexGrid WITHOUT existing blind data
5926 /// @param blindData Raw pointer to an array of blind data
5927 /// @param semantic GridBlindDataSemantic of the blind data
5928 /// @param raw If true the IndexGrid and blind data are streamed raw, i.e. without a file header.
5929 template<typename StreamT, typename ValueT> // StreamT class must support: "void write(const char*, size_t)"
5930 void writeUncompressedGrid(StreamT& os, const GridData* gridData, const ValueT *blindData,
5931  GridBlindDataSemantic semantic = GridBlindDataSemantic::Unknown, bool raw = false)
5932 {
5933  NANOVDB_ASSERT(gridData->mMagic == NANOVDB_MAGIC_NUMB || gridData->mMagic == NANOVDB_MAGIC_GRID);
5934  NANOVDB_ASSERT(gridData->mVersion.isCompatible());
5935  NANOVDB_ASSERT(blindData);
5936 
5937  char str[256];
5938  if (gridData->mGridClass != GridClass::IndexGrid) {
5939  fprintf(stderr, "nanovdb::writeUncompressedGrid: expected an IndexGrid but got \"%s\"\n", toStr(str, gridData->mGridClass));
5940  exit(EXIT_FAILURE);
5941  } else if (gridData->mBlindMetadataCount != 0u) {// to-do: allow for existing blind data in grid
5942  fprintf(stderr, "nanovdb::writeUncompressedGrid: index grid already has \"%i\" blind data\n", gridData->mBlindMetadataCount);
5943  exit(EXIT_FAILURE);
5944  }
5945  const size_t gridSize = gridData->mGridSize + sizeof(GridBlindMetaData) + gridData->mData1*sizeof(ValueT);
5946  if (!raw) {// segment with a single grid: FileHeader, FileMetaData, gridName, Grid
5947 #ifdef NANOVDB_USE_NEW_MAGIC_NUMBERS
5948  FileHeader head{NANOVDB_MAGIC_FILE, gridData->mVersion, 1u/*grid count*/, Codec::NONE};
5949 #else
5950  FileHeader head{NANOVDB_MAGIC_NUMB, gridData->mVersion, 1u/*grid count*/, Codec::NONE};
5951 #endif
5952  const char* gridName = gridData->gridName();
5953  const uint32_t nameSize = util::strlen(gridName) + 1;// include '\0'
5954  const TreeData* treeData = (const TreeData*)(gridData->treePtr());
5955  FileMetaData meta{gridSize, gridSize, 0u, treeData->mVoxelCount,
5956  gridData->mGridType, gridData->mGridClass, gridData->mWorldBBox,
5957  treeData->bbox(), gridData->mVoxelSize, nameSize,
5958  {treeData->mNodeCount[0], treeData->mNodeCount[1], treeData->mNodeCount[2], 1u},
5959  {treeData->mTileCount[0], treeData->mTileCount[1], treeData->mTileCount[2]},
5960  Codec::NONE, 1u, gridData->mVersion }; // FileMetaData
5961  os.write((const char*)&head, sizeof(FileHeader)); // write header
5962  os.write((const char*)&meta, sizeof(FileMetaData)); // write meta data
5963  os.write(gridName, nameSize); // write grid name
5964  }// if (!raw)
5965  GridData data;
5966  data = *gridData;// deep copy
5967  data.mGridIndex = 0;
5968  data.mGridCount = 1;
5969  data.mGridSize = gridSize;// increment by blind data + meta data
5970  data.mBlindMetadataCount = 1u;
5971  data.mBlindMetadataOffset = gridData->mGridSize;
5972  os.write((const char*)&data, sizeof(GridData));
5973  os.write((const char*)gridData + sizeof(GridData), gridData->mGridSize - sizeof(GridData));// write the IndexGrid
5974  GridBlindMetaData meta(sizeof(GridBlindMetaData), gridData->mData1, sizeof(ValueT),
5975  semantic, GridBlindDataClass::ChannelArray, toGridType<ValueT>());
5976  meta.setName("channel_0");
5977  os.write((const char*)&meta, sizeof(GridBlindMetaData));
5978  os.write((const char*)blindData, gridData->mData1*sizeof(ValueT));
5979 }// writeUncompressedGrid
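
/// Example: a minimal sketch, assuming @c indexGrid points to the GridData of an IndexGrid with no
/// existing blind data and @c values is a float array with one entry per value indexed by the grid:
/// @code
/// std::ofstream os("channel.nvdb", std::ios::binary); // hypothetical file name
/// nanovdb::io::writeUncompressedGrid(os, indexGrid, values); // appends the array as blind data
/// @endcode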
5980 
5981 /// @brief write multiple NanoVDB grids to a single file, without compression.
5982 /// @note To write all grids in a single GridHandle simply use handle.write(fileName)
5983 template<typename GridHandleT, template<typename...> class VecT>
5984 void writeUncompressedGrids(const char* fileName, const VecT<GridHandleT>& handles, bool raw = false)
5985 {
5986 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ofstream or FILE implementations
5987  std::ofstream os(fileName, std::ios::out | std::ios::binary | std::ios::trunc);
5988 #else
5989  struct StreamT {
5990  FILE* fptr;
5991  StreamT(const char* name) { fptr = fopen(name, "wb"); }
5992  ~StreamT() { fclose(fptr); }
5993  void write(const char* data, size_t n) { fwrite(data, 1, n, fptr); }
5994  bool is_open() const { return fptr != NULL; }
5995  } os(fileName);
5996 #endif
5997  if (!os.is_open()) {
5998  fprintf(stderr, "nanovdb::writeUncompressedGrids: Unable to open file \"%s\"for output\n", fileName);
5999  exit(EXIT_FAILURE);
6000  }
6001  for (auto& h : handles) {
6002  for (uint32_t n=0; n<h.gridCount(); ++n) writeUncompressedGrid(os, h.gridData(n), raw);
6003  }
6004 } // writeUncompressedGrids
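
/// Example: a minimal sketch, assuming @c handles is a std::vector of GridHandle<HostBuffer>
/// (the file name is hypothetical):
/// @code
/// nanovdb::io::writeUncompressedGrids("grids.nvdb", handles); // one segment per grid, no compression
/// @endcode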
6005 
6006 /// @brief read all uncompressed grids from a stream and return their handles.
6007 ///
6008 /// @throw std::invalid_argument if stream does not contain a single uncompressed valid NanoVDB grid
6009 ///
6010 /// @details StreamT class must support: "bool read(char*, size_t)" and "void skip(uint32_t)"
6011 template<typename GridHandleT, typename StreamT, template<typename...> class VecT>
6012 VecT<GridHandleT> readUncompressedGrids(StreamT& is, const typename GridHandleT::BufferType& pool = typename GridHandleT::BufferType())
6013 {
6014  VecT<GridHandleT> handles;
6015  GridData data;
6016  is.read((char*)&data, sizeof(GridData));
6017  if (data.isValid()) {// stream contains a raw grid buffer
6018  uint64_t size = data.mGridSize, sum = 0u;
6019  while(data.mGridIndex + 1u < data.mGridCount) {
6020  is.skip(data.mGridSize - sizeof(GridData));// skip grid
6021  is.read((char*)&data, sizeof(GridData));// read sizeof(GridData) bytes
6022  sum += data.mGridSize;
6023  }
6024  is.skip(-int64_t(sum + sizeof(GridData)));// rewind to start
6025  auto buffer = GridHandleT::BufferType::create(size + sum, &pool);
6026  is.read((char*)(buffer.data()), buffer.size());
6027  handles.emplace_back(std::move(buffer));
6028  } else {// Header0, MetaData0, gridName0, Grid0...HeaderN, MetaDataN, gridNameN, GridN
6029  is.skip(-sizeof(GridData));// rewind
6030  FileHeader head;
6031  while(is.read((char*)&head, sizeof(FileHeader))) {
6032  if (!head.isValid()) {
6033  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid magic number = \"%s\"\n", (const char*)&(head.magic));
6034  exit(EXIT_FAILURE);
6035  } else if (!head.version.isCompatible()) {
6036  char str[20];
6037  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid major version = \"%s\"\n", toStr(str, head.version));
6038  exit(EXIT_FAILURE);
6039  } else if (head.codec != Codec::NONE) {
6040  char str[8];
6041  fprintf(stderr, "nanovdb::readUncompressedGrids: invalid codec = \"%s\"\n", toStr(str, head.codec));
6042  exit(EXIT_FAILURE);
6043  }
6044  FileMetaData meta;
6045  for (uint16_t i = 0; i < head.gridCount; ++i) { // read all grids in segment
6046  is.read((char*)&meta, sizeof(FileMetaData));// read meta data
6047  is.skip(meta.nameSize); // skip grid name
6048  auto buffer = GridHandleT::BufferType::create(meta.gridSize, &pool);
6049  is.read((char*)buffer.data(), meta.gridSize);// read grid
6050  handles.emplace_back(std::move(buffer));
6051  }// loop over grids in segment
6052  }// loop over segments
6053  }
6054  return handles;
6055 } // readUncompressedGrids
6056 
6057 /// @brief Read multiple uncompressed NanoVDB grids from a file and return them as a vector.
6058 template<typename GridHandleT, template<typename...> class VecT>
6059 VecT<GridHandleT> readUncompressedGrids(const char* fileName, const typename GridHandleT::BufferType& buffer = typename GridHandleT::BufferType())
6060 {
6061 #ifdef NANOVDB_USE_IOSTREAMS // use this to switch between std::ifstream or FILE implementations
6062  struct StreamT : public std::ifstream {
6063  StreamT(const char* name) : std::ifstream(name, std::ios::in | std::ios::binary){}
6064  void skip(int64_t off) { this->seekg(off, std::ios_base::cur); }
6065  };
6066 #else
6067  struct StreamT {
6068  FILE* fptr;
6069  StreamT(const char* name) { fptr = fopen(name, "rb"); }
6070  ~StreamT() { fclose(fptr); }
6071  bool read(char* data, size_t n) {
6072  size_t m = fread(data, 1, n, fptr);
6073  return n == m;
6074  }
6075  void skip(int64_t off) { fseek(fptr, (long int)off, SEEK_CUR); }
6076  bool is_open() const { return fptr != NULL; }
6077  };
6078 #endif
6079  StreamT is(fileName);
6080  if (!is.is_open()) {
6081  fprintf(stderr, "nanovdb::readUncompressedGrids: Unable to open file \"%s\"for input\n", fileName);
6082  exit(EXIT_FAILURE);
6083  }
6084  return readUncompressedGrids<GridHandleT, StreamT, VecT>(is, buffer);
6085 } // readUncompressedGrids
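
/// Example: a minimal sketch, assuming the GridHandle and HostBuffer classes shipped with NanoVDB
/// and a hypothetical file "grids.nvdb" written without compression:
/// @code
/// auto handles = nanovdb::io::readUncompressedGrids<nanovdb::GridHandle<nanovdb::HostBuffer>, std::vector>("grids.nvdb");
/// for (auto& h : handles) {
///     if (auto* grid = h.grid<float>()) { /* use the float grid */ }
/// }
/// @endcode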
6086 
6087 #endif // if !defined(__CUDA_ARCH__) && !defined(__HIP__)
6088 
6089 } // namespace io
6090 
6091 // ----------------------------> Implementations of random access methods <--------------------------------------
6092 
6093 /**
6094 * @brief Below is an example of a struct used for random get methods.
6095 * @note All member methods, data, and types are mandatory.
6096 * @code
6097  template<typename BuildT>
6098  struct GetOpT {
6099  using Type = typename BuildToValueMap<BuildT>::Type;// return type
6100  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6101  __hostdev__ static Type get(const NanoRoot<BuildT>& root, args...) { }
6102  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile, args...) { }
6103  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n, args...) { }
6104  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n, args...) { }
6105  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n, args...) { }
6106  };
6107  @endcode
6108 
6109  * @brief Below is an example of the struct used for random set methods
6110  * @note All member methods and data are mandatory.
6111  * @code
6112  template<typename BuildT>
6113  struct SetOpT {
6114  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6115  __hostdev__ static void set(NanoRoot<BuildT>& root, args...) { }
6116  __hostdev__ static void set(typename NanoRoot<BuildT>::Tile& tile, args...) { }
6117  __hostdev__ static void set(NanoUpper<BuildT>& node, uint32_t n, args...) { }
6118  __hostdev__ static void set(NanoLower<BuildT>& node, uint32_t n, args...) { }
6119  __hostdev__ static void set(NanoLeaf<BuildT>& leaf, uint32_t n, args...) { }
6120  };
6121  @endcode
6122 **/
6123 
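/// Example: a minimal sketch of dispatching the structs defined below through a cached ReadAccessor,
/// assuming @c grid is a @c NanoGrid<float>:
/// @code
/// auto acc = grid.getAccessor();
/// const float v = acc.get<nanovdb::GetValue<float>>(nanovdb::Coord(1, 2, 3));
/// float tmp;
/// const bool active = acc.get<nanovdb::ProbeValue<float>>(nanovdb::Coord(1, 2, 3), tmp);
/// @endcode
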
6124 /// @brief Implements Tree::getValue(math::Coord), i.e. return the value associated with a specific coordinate @c ijk.
6125 /// @tparam BuildT Build type of the grid being called
6126 /// @details The value at a coordinate either maps to the background, a tile value or a leaf value.
6127 template<typename BuildT>
6128 struct GetValue
6129 {
6130  using Type = typename BuildToValueMap<BuildT>::Type;// return type
6131  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6132  __hostdev__ static Type get(const NanoRoot<BuildT>& root) { return root.mBackground; }
6133  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.value; }
6134  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
6135  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n) { return node.mTable[n].value; }
6136  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.getValue(n); } // works with all build types
6137 }; // GetValue<BuildT>
6138 
6139 template<typename BuildT>
6140 struct SetValue
6141 {
6142  static_assert(!BuildTraits<BuildT>::is_special, "SetValue does not support special value types, e.g. Fp4, Fp8, Fp16, FpN");
6143  using ValueT = typename NanoLeaf<BuildT>::ValueType;
6144  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6145  __hostdev__ static void set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
6146  __hostdev__ static void set(typename NanoRoot<BuildT>::Tile& tile, const ValueT& v) { tile.value = v; }
6147  __hostdev__ static void set(NanoUpper<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
6148  __hostdev__ static void set(NanoLower<BuildT>& node, uint32_t n, const ValueT& v) { node.mTable[n].value = v; }
6149  __hostdev__ static void set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
6150 }; // SetValue<BuildT>
6151 
6152 template<typename BuildT>
6153 struct SetVoxel
6154 {
6155  static_assert(!BuildTraits<BuildT>::is_special, "SetVoxel does not support special value types. e.g. Fp4, Fp8, Fp16, FpN");
6156  using ValueT = typename NanoLeaf<BuildT>::ValueType;
6157  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6158  __hostdev__ static void set(NanoRoot<BuildT>&, const ValueT&) {} // no-op
6159  __hostdev__ static void set(typename NanoRoot<BuildT>::Tile&, const ValueT&) {} // no-op
6160  __hostdev__ static void set(NanoUpper<BuildT>&, uint32_t, const ValueT&) {} // no-op
6161  __hostdev__ static void set(NanoLower<BuildT>&, uint32_t, const ValueT&) {} // no-op
6162  __hostdev__ static void set(NanoLeaf<BuildT>& leaf, uint32_t n, const ValueT& v) { leaf.mValues[n] = v; }
6163 }; // SetVoxel<BuildT>
6164 
6165 /// @brief Implements Tree::isActive(math::Coord)
6166 /// @tparam BuildT Build type of the grid being called
6167 template<typename BuildT>
6168 struct GetState
6169 {
6170  using Type = bool;
6171  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6172  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return false; }
6173  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile) { return tile.state > 0; }
6174  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
6175  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n) { return node.mValueMask.isOn(n); }
6176  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n) { return leaf.mValueMask.isOn(n); }
6177 }; // GetState<BuildT>
6178 
6179 /// @brief Implements Tree::getDim(math::Coord)
6180 /// @tparam BuildT Build type of the grid being called
6181 template<typename BuildT>
6182 struct GetDim
6183 {
6184  using Type = uint32_t;
6185  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6186  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return 0u; } // background
6187  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return 4096u; }
6188  __hostdev__ static Type get(const NanoUpper<BuildT>&, uint32_t) { return 128u; }
6189  __hostdev__ static Type get(const NanoLower<BuildT>&, uint32_t) { return 8u; }
6190  __hostdev__ static Type get(const NanoLeaf<BuildT>&, uint32_t) { return 1u; }
6191 }; // GetDim<BuildT>
6192 
6193 /// @brief Return the pointer to the leaf node that contains math::Coord. Implements Tree::probeLeaf(math::Coord)
6194 /// @tparam BuildT Build type of the grid being called
6195 template<typename BuildT>
6196 struct GetLeaf
6197 {
6198  using Type = const NanoLeaf<BuildT>*;
6199  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6200  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6201  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6202  __hostdev__ static Type get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
6203  __hostdev__ static Type get(const NanoLower<BuildT>&, uint32_t) { return nullptr; }
6204  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t) { return &leaf; }
6205 }; // GetLeaf<BuildT>
6206 
6207 /// @brief Return a pointer to the lower internal node where math::Coord maps to one of its values, i.e. where traversal terminates
6208 /// @tparam BuildT Build type of the grid being called
6209 template<typename BuildT>
6210 struct GetLower
6211 {
6212  using Type = const NanoLower<BuildT>*;
6213  static constexpr int LEVEL = 1;// minimum level for the descent during top-down traversal
6214  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6215  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6216  __hostdev__ static Type get(const NanoUpper<BuildT>&, uint32_t) { return nullptr; }
6217  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t) { return &node; }
6218 }; // GetLower<BuildT>
6219 
6220 /// @brief Return a pointer to the upper internal node where math::Coord maps to one of its values, i.e. where traversal terminates
6221 /// @tparam BuildT Build type of the grid being called
6222 template<typename BuildT>
6223 struct GetUpper
6224 {
6225  using Type = const NanoUpper<BuildT>*;
6226  static constexpr int LEVEL = 2;// minimum level for the descent during top-down traversal
6227  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6228  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile&) { return nullptr; }
6229  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t) { return &node; }
6230 }; // GetUpper<BuildT>
6231 
6232 /// @brief Return a pointer to the root Tile where math::Coord maps to one of its values, i.e. where traversal terminates
6233 /// @tparam BuildT Build type of the grid being called
6234 template<typename BuildT>
6235 struct GetTile
6236 {
6237  using Type = const typename NanoRoot<BuildT>::Tile*;
6238  static constexpr int LEVEL = 3;// minimum level for the descent during top-down traversal
6239  __hostdev__ static Type get(const NanoRoot<BuildT>&) { return nullptr; }
6240  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile &tile) { return &tile; }
6241 }; // GetTile<BuildT>
6242 
6243 /// @brief Implements Tree::probeValue(math::Coord)
6244 /// @tparam BuildT Build type of the grid being called
6245 template<typename BuildT>
6246 struct ProbeValue
6247 {
6248  using Type = bool;
6249  static constexpr int LEVEL = 0;// minimum level for the descent during top-down traversal
6250  using ValueT = typename BuildToValueMap<BuildT>::Type;
6251  __hostdev__ static Type get(const NanoRoot<BuildT>& root, ValueT& v)
6252  {
6253  v = root.mBackground;
6254  return false;
6255  }
6256  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile, ValueT& v)
6257  {
6258  v = tile.value;
6259  return tile.state > 0u;
6260  }
6261  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n, ValueT& v)
6262  {
6263  v = node.mTable[n].value;
6264  return node.mValueMask.isOn(n);
6265  }
6266  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n, ValueT& v)
6267  {
6268  v = node.mTable[n].value;
6269  return node.mValueMask.isOn(n);
6270  }
6271  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n, ValueT& v)
6272  {
6273  v = leaf.getValue(n);
6274  return leaf.mValueMask.isOn(n);
6275  }
6276 }; // ProbeValue<BuildT>
6277 
6278 /// @brief Implements Tree::getNodeInfo(math::Coord)
6279 /// @tparam BuildT Build type of the grid being called
6280 template<typename BuildT>
6281 struct GetNodeInfo
6282 {
6283  using ValueType = typename NanoLeaf<BuildT>::ValueType;
6284  using FloatType = typename NanoLeaf<BuildT>::FloatType;
6285  struct NodeInfo
6286  {
6287  uint32_t level, dim;
6288  ValueType minimum, maximum;
6289  FloatType average, stdDevi;
6290  CoordBBox bbox;
6291  };
6292  static constexpr int LEVEL = 0;
6293  using Type = NodeInfo;
6294  __hostdev__ static Type get(const NanoRoot<BuildT>& root)
6295  {
6296  return NodeInfo{3u, NanoUpper<BuildT>::DIM, root.minimum(), root.maximum(), root.average(), root.stdDeviation(), root.bbox()};
6297  }
6298  __hostdev__ static Type get(const typename NanoRoot<BuildT>::Tile& tile)
6299  {
6300  return NodeInfo{3u, NanoUpper<BuildT>::DIM, tile.value, tile.value, static_cast<FloatType>(tile.value), 0, CoordBBox::createCube(tile.origin(), NanoUpper<BuildT>::DIM)};
6301  }
6302  __hostdev__ static Type get(const NanoUpper<BuildT>& node, uint32_t n)
6303  {
6304  return NodeInfo{2u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
6305  }
6306  __hostdev__ static Type get(const NanoLower<BuildT>& node, uint32_t n)
6307  {
6308  return NodeInfo{1u, node.dim(), node.minimum(), node.maximum(), node.average(), node.stdDeviation(), node.bbox()};
6309  }
6310  __hostdev__ static Type get(const NanoLeaf<BuildT>& leaf, uint32_t n)
6311  {
6312  return NodeInfo{0u, leaf.dim(), leaf.minimum(), leaf.maximum(), leaf.average(), leaf.stdDeviation(), leaf.bbox()};
6313  }
6314 }; // GetNodeInfo<BuildT>
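
/// Example: a minimal sketch, assuming @c acc is a ReadAccessor obtained from a @c NanoGrid<float>:
/// @code
/// auto info = acc.get<nanovdb::GetNodeInfo<float>>(nanovdb::Coord(1, 2, 3));
/// // info.level is 0 for a leaf, 1 or 2 for internal nodes and 3 for a root tile or the background
/// // info.bbox is the index-space bounding box of the node containing the coordinate
/// @endcode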
6315 
6316 } // namespace nanovdb ===================================================================
6317 
6318 #endif // end of NANOVDB_NANOVDB_H_HAS_BEEN_INCLUDED
typename FloatTraits< BuildT >::FloatType FloatType
Definition: NanoVDB.h:3653
__hostdev__ ValueType getMin() const
Definition: NanoVDB.h:3688
__hostdev__ ValueOffIterator beginValueOff() const
Definition: NanoVDB.h:4305
__hostdev__ DenseIter()
Definition: NanoVDB.h:2985
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:2255
__hostdev__ bool probeValue(const math::Coord &ijk, typename util::remove_const< ChannelT >::type &v) const
return the state and updates the value of the specified voxel
Definition: NanoVDB.h:5748
__hostdev__ ValueT value() const
Definition: NanoVDB.h:2748
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3808
typename BuildT::RootType RootType
Definition: NanoVDB.h:2130
__hostdev__ const Vec3d & voxelSize() const
Return a const reference to the size of a voxel in world units.
Definition: NanoVDB.h:2190
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:5338
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:1094
ValueT ValueType
Definition: NanoVDB.h:4907
__hostdev__ uint64_t full() const
Definition: NanoVDB.h:1854
__hostdev__ const char * shortGridName() const
Return a c-string with the name of this grid, truncated to 255 characters.
Definition: NanoVDB.h:2289
__hostdev__ util::enable_if<!util::is_same< MaskT, Mask >::value, Mask & >::type operator=(const MaskT &other)
Assignment operator that works with openvdb::util::NodeMask.
Definition: NanoVDB.h:1191
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this root node and any of its child n...
Definition: NanoVDB.h:3040
bool type
Definition: NanoVDB.h:528
Visits all tile values in this node, i.e. both inactive and active tiles.
Definition: NanoVDB.h:3341
__hostdev__ math::BBox< CoordT > bbox() const
Return the bounding box in index space of active values in this leaf node.
Definition: NanoVDB.h:4418
math::Extrema extrema(const IterT &iter, bool threaded=true)
Iterate over a scalar grid and compute extrema (min/max) of the values of the voxels that are visited...
Definition: Statistics.h:354
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4332
uint16_t ArrayType
Definition: NanoVDB.h:4159
__hostdev__ CheckMode toCheckMode(const Checksum &checksum)
Maps 64 bit checksum to CheckMode enum.
Definition: NanoVDB.h:1893
C++11 implementation of std::enable_if.
Definition: Util.h:341
FloatType mStdDevi
Definition: NanoVDB.h:3665
float type
Definition: NanoVDB.h:535
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3904
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:5344
__hostdev__ CoordT offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:4409
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4028
__hostdev__ bool isEmpty() const
Definition: NanoVDB.h:5536
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:4849
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:4374
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:5513
typename util::match_const< DataType, RootT >::type DataT
Definition: NanoVDB.h:2860
void writeUncompressedGrids(const char *fileName, const VecT< GridHandleT > &handles, bool raw=false)
write multiple NanoVDB grids to a single file, without compression.
Definition: NanoVDB.h:5984
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2437
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:3065
Definition: NanoVDB.h:5844
__hostdev__ Vec3d getVoxelSize() const
Return a voxels size in each coordinate direction, measured at the origin.
Definition: NanoVDB.h:1530
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:4820
StatsT mStdDevi
Definition: NanoVDB.h:3185
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:2269
__hostdev__ const Vec3dBBox & worldBBox() const
Definition: NanoVDB.h:5527
__hostdev__ Vec3T applyMap(const Vec3T &xyz) const
Definition: NanoVDB.h:2005
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findFirst() const
Definition: NanoVDB.h:1352
__hostdev__ TileT * tile() const
Definition: NanoVDB.h:2869
__hostdev__ bool isOff(uint32_t n) const
Return true if the given bit is NOT set.
Definition: NanoVDB.h:1220
DataType::template TileIter< DataT > mTileIter
Definition: NanoVDB.h:2862
__hostdev__ Vec3T applyMapF(const Vec3T &xyz) const
Definition: NanoVDB.h:2016
__hostdev__ const char * gridName() const
Definition: NanoVDB.h:2073
__hostdev__ ChannelT * setChannel(ChannelT *channelPtr)
Change to an external channel.
Definition: NanoVDB.h:5727
GridBlindDataClass mDataClass
Definition: NanoVDB.h:1575
typename util::match_const< Tile, RootT >::type TileT
Definition: NanoVDB.h:2861
__hostdev__ ChildT * getChild(uint32_t n)
Returns a pointer to the child node at the specifed linear offset.
Definition: NanoVDB.h:3213
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:5340
__hostdev__ Vec3T applyIJTF(const Vec3T &xyz) const
Definition: NanoVDB.h:1527
VDB Tree, which is a thin wrapper around a RootNode.
Definition: NanoVDB.h:2424
__hostdev__ Vec3T applyMapF(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 32bit floating point arithmetics.
Definition: NanoVDB.h:1458
decltype(mFlags) Type
Definition: NanoVDB.h:945
OutGridT const XformOp bool bool
Definition: ValueTransformer.h:609
__hostdev__ Vec3T indexToWorld(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:2201
math::BBox< CoordType > BBoxType
Definition: NanoVDB.h:2849
__hostdev__ Tile * tile(uint32_t n)
Definition: NanoVDB.h:2681
__hostdev__ DenseIter operator++(int)
Definition: NanoVDB.h:2999
__hostdev__ bool isActive() const
Definition: NanoVDB.h:2923
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:5341
__hostdev__ GridClass mapToGridClass(GridClass defaultClass=GridClass::Unknown)
Definition: NanoVDB.h:908
__hostdev__ bool isChild() const
Definition: NanoVDB.h:2663
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:2469
__hostdev__ ValueIterator()
Definition: NanoVDB.h:4315
float Type
Definition: NanoVDB.h:555
float FloatType
Definition: NanoVDB.h:3733
__hostdev__ CoordT origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:4394
Highest level of the data structure. Contains a tree and a world->index transform (that currently onl...
Definition: NanoVDB.h:2126
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:4826
__hostdev__ ValueOnIter(RootT *parent)
Definition: NanoVDB.h:2952
Vec3dBBox mWorldBBox
Definition: NanoVDB.h:1933
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3326
__hostdev__ const NodeTrait< RootT, 1 >::type * getFirstLower() const
Definition: NanoVDB.h:2563
__hostdev__ Vec3T applyIJTF(const Vec3T &xyz) const
Definition: NanoVDB.h:2024
FloatType stdDevi
Definition: NanoVDB.h:6289
__hostdev__ char * toStr(char *dst, GridType gridType)
Maps a GridType to a c-string.
Definition: NanoVDB.h:248
__hostdev__ ValueType maximum() const
Return a const reference to the maximum active value encoded in this leaf node.
Definition: NanoVDB.h:4380
__hostdev__ DenseIterator(const InternalNode *parent)
Definition: NanoVDB.h:3425
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:3024
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this internal node.
Definition: NanoVDB.h:3475
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5136
#define NANOVDB_PATCH_VERSION_NUMBER
Definition: NanoVDB.h:148
__hostdev__ void init(std::initializer_list< GridFlags > list={GridFlags::IsBreadthFirst}, uint64_t gridSize=0u, const Map &map=Map(), GridType gridType=GridType::Unknown, GridClass gridClass=GridClass::Unknown)
Definition: NanoVDB.h:1945
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:4958
static __hostdev__ constexpr uint64_t memUsage()
Definition: NanoVDB.h:3807
__hostdev__ bool getValue(uint32_t i) const
Definition: NanoVDB.h:4035
__hostdev__ bool isValue() const
Definition: NanoVDB.h:2733
__hostdev__ void setValueOnly(uint32_t offset, const ValueType &v)
Sets the value at the specified location but leaves its state unchanged.
Definition: NanoVDB.h:4464
__hostdev__ Vec3T applyInverseMap(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 64bit floating point arithmetics.
Definition: NanoVDB.h:1484
__hostdev__ ValueOnIter()
Definition: NanoVDB.h:2951
Class to access values in channels at a specific voxel location.
Definition: NanoVDB.h:5675
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3686
Definition: NanoVDB.h:2688
static __hostdev__ uint32_t padding()
Definition: NanoVDB.h:4434
typename GridT::TreeType Type
Definition: NanoVDB.h:2407
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:2889
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:5339
char mGridName[MaxNameSize]
Definition: NanoVDB.h:1931
__hostdev__ bool operator>(const Version &rhs) const
Definition: NanoVDB.h:733
static __hostdev__ size_t memUsage(uint32_t bitWidth)
Definition: NanoVDB.h:3912
__hostdev__ void setChild(const CoordType &k, const void *ptr, const RootData *data)
Definition: NanoVDB.h:2649
__hostdev__ Version version() const
Definition: NanoVDB.h:2148
PointAccessor(const NanoGrid< Point > &grid)
Definition: NanoVDB.h:5614
__hostdev__ const ValueT & getMax() const
Definition: NanoVDB.h:2814
const GridBlindMetaData & operator=(const GridBlindMetaData &rhs)
Copy assignment operator that resets mDataOffset and copies mName.
Definition: NanoVDB.h:1618
__hostdev__ ValueType getValue(uint32_t i) const
Definition: NanoVDB.h:3679
__hostdev__ Map(double s, const Vec3d &t=Vec3d(0.0, 0.0, 0.0))
Definition: NanoVDB.h:1418
__hostdev__ ChildNodeType * probeChild(const CoordType &ijk)
Definition: NanoVDB.h:3524
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:3280
__hostdev__ void setLongGridNameOn(bool on=true)
Definition: NanoVDB.h:1994
__hostdev__ Mask(const Mask &other)
Copy constructor.
Definition: NanoVDB.h:1164
static __hostdev__ uint32_t CoordToOffset(const CoordT &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:4492
__hostdev__ uint64_t lastOffset() const
Definition: NanoVDB.h:4132
__hostdev__ const BlindDataT * getBlindData(uint32_t n) const
Definition: NanoVDB.h:2319
#define NANOVDB_MAGIC_NUMB
Definition: NanoVDB.h:139
__hostdev__ void setWord(WordT w, uint32_t n)
Definition: NanoVDB.h:1182
GridClass
Classes (superset of OpenVDB) that are currently supported by NanoVDB.
Definition: NanoVDB.h:283
typename DataType::ValueT ValueType
Definition: NanoVDB.h:2844
uint64_t magic
Definition: NanoVDB.h:5820
__hostdev__ bool isPartial() const
return true if the 64 bit checksum is partial, i.e. of head only
Definition: NanoVDB.h:1863
static T scalar(const T &s)
Definition: NanoVDB.h:768
Type Pow2(Type x)
Return x2.
Definition: Math.h:573
typename RootT::BuildType BuildType
Definition: NanoVDB.h:2439
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4045
Definition: NanoVDB.h:2911
__hostdev__ void * treePtr()
Definition: NanoVDB.h:2027
uint32_t state
Definition: NanoVDB.h:2669
BuildT BuildType
Definition: NanoVDB.h:3652
__hostdev__ void setDev(const FloatType &v)
Definition: NanoVDB.h:3703
__hostdev__ ConstTileIterator cbeginTile() const
Definition: NanoVDB.h:2759
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2436
Return the pointer to the leaf node that contains math::Coord. Implements Tree::probeLeaf(math::Coord...
Definition: NanoVDB.h:1782
__hostdev__ bool getDev() const
Definition: NanoVDB.h:4039
__hostdev__ bool isValid(GridType gridType, GridClass gridClass)
return true if the combination of GridType and GridClass is valid.
Definition: NanoVDB.h:643
static __hostdev__ bool isAligned(const void *p)
return true if the specified pointer is 32 byte aligned
Definition: NanoVDB.h:579
__hostdev__ void * getRoot()
Get a non-const void pointer to the root node (never NULL)
Definition: NanoVDB.h:2383
__hostdev__ const CoordBBox & indexBBox() const
Definition: NanoVDB.h:5528
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:3337
uint8_t mFlags
Definition: NanoVDB.h:3659
__hostdev__ TileT * operator->() const
Definition: NanoVDB.h:2718
__hostdev__ LeafNodeType * getFirstLeaf()
Template specializations of getFirstNode.
Definition: NanoVDB.h:2560
uint64_t mOffset
Definition: NanoVDB.h:4167
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3709
__hostdev__ ValueIter(RootT *parent)
Definition: NanoVDB.h:2918
__hostdev__ GridBlindDataSemantic toSemantic(GridClass gridClass, GridBlindDataSemantic defaultSemantic=GridBlindDataSemantic::Unknown)
Maps from GridClass to GridBlindDataSemantic.
Definition: NanoVDB.h:463
static int64_t PtrDiff(const void *p, const void *q)
Compute the distance, in bytes, between two pointers, dist = p - q.
Definition: Util.h:498
__hostdev__ uint32_t gridIndex() const
Return index of this grid in the buffer.
Definition: NanoVDB.h:2161
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:2463
__hostdev__ bool isEmpty() const
Return true if the root is empty, i.e. has not child nodes or constant tiles.
Definition: NanoVDB.h:2392
Definition: NanoVDB.h:2116
__hostdev__ const StatsT & stdDeviation() const
Definition: NanoVDB.h:2816
LeafNodeType Node0
Definition: NanoVDB.h:2446
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:3064
Checksum mChecksum
Definition: NanoVDB.h:1925
Return point to the upper internal node where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6223
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:2256
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is l...
Definition: NanoVDB.h:5578
typename DataType::StatsT FloatType
Definition: NanoVDB.h:2845
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this root node and any of its child nodes...
Definition: NanoVDB.h:3049
Below is an example of a struct used for random get methods.
Definition: NanoVDB.h:1772
BitFlags()
Definition: NanoVDB.h:946
__hostdev__ FloatType getAvg() const
Definition: NanoVDB.h:3690
__hostdev__ ChildIterator beginChild()
Definition: NanoVDB.h:2907
__hostdev__ bool isActive(const CoordT &ijk) const
Return true if the voxel value at the given coordinate is active.
Definition: NanoVDB.h:4468
__hostdev__ ConstDenseIterator cbeginDense() const
Definition: NanoVDB.h:3011
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4137
ChildT ChildNodeType
Definition: NanoVDB.h:2838
#define NANOVDB_MAGIC_GRID
Definition: NanoVDB.h:140
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4044
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:3410
typename BuildToValueMap< T >::type BuildToValueMapT
Definition: NanoVDB.h:574
__hostdev__ const MaskType< LOG2DIM > & valueMask() const
Return a const reference to the bit mask of active voxels in this leaf node.
Definition: NanoVDB.h:4373
void set(const MatT &mat, const MatT &invMat, const Vec3T &translate, double taper=1.0)
Initialize the member data from 3x3 or 4x4 matrices.
Definition: NanoVDB.h:1534
static __hostdev__ KeyT CoordToKey(const CoordType &ijk)
Definition: NanoVDB.h:2609
__hostdev__ void setAvg(float avg)
Definition: NanoVDB.h:3783
MaskT< LOG2DIM > ArrayType
Definition: NanoVDB.h:3969
T Type
Definition: NanoVDB.h:506
__hostdev__ bool isActive() const
Definition: NanoVDB.h:3369
uint64_t mMagic
Definition: NanoVDB.h:1924
__hostdev__ ChannelT * setChannel(uint32_t channelID)
Change to an internal channel, assuming it exists as blind data in the IndexGrid.
Definition: NanoVDB.h:5733
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4043
__hostdev__ bool isOff() const
Return true if none of the bits are set in this Mask.
Definition: NanoVDB.h:1232
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:5512
__hostdev__ uint32_t valueCount() const
Definition: NanoVDB.h:4128
uint64_t mGridSize
Definition: NanoVDB.h:1930
__hostdev__ NodeT * probeChild(ValueType &value) const
Definition: NanoVDB.h:2987
RootT Node3
Definition: NanoVDB.h:2443
PointType
Definition: NanoVDB.h:388
__hostdev__ void toggle(uint32_t n)
Definition: NanoVDB.h:1315
Trait to map from LEVEL to node type.
Definition: NanoVDB.h:4625
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4086
__hostdev__ void setMax(const ValueT &v)
Definition: NanoVDB.h:2819
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4087
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:2258
__hostdev__ ValueIter()
Definition: NanoVDB.h:2917
__hostdev__ const char * shortGridName() const
Definition: NanoVDB.h:5525
#define NANOVDB_MINOR_VERSION_NUMBER
Definition: NanoVDB.h:147
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:4927
__hostdev__ WordT getWord(uint32_t n) const
Definition: NanoVDB.h:1175
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this leaf node.
Definition: NanoVDB.h:4386
uint64_t KeyT
Return a key based on the coordinates of a voxel.
Definition: NanoVDB.h:2607
Vec3d mVoxelSize
Definition: NanoVDB.h:1934
BuildT ValueType
Definition: NanoVDB.h:3651
uint64_t mFlags
Definition: NanoVDB.h:3178
__hostdev__ const uint32_t & getTableSize() const
Definition: NanoVDB.h:3037
int64_t mDataOffset
Definition: NanoVDB.h:1571
__hostdev__ ValueIterator()
Definition: NanoVDB.h:3347
__hostdev__ Checksum(uint32_t head, uint32_t tail)
Constructor that allows the two 32bit checksums to be initiated explicitly.
Definition: NanoVDB.h:1836
GridBlindMetaData()
Empty constructor.
Definition: NanoVDB.h:1581
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:1123
__hostdev__ Mask()
Initialize all bits to zero.
Definition: NanoVDB.h:1151
__hostdev__ bool isCached2(const CoordType &ijk) const
Definition: NanoVDB.h:5118
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:3519
Implements Tree::getNodeInfo(math::Coord)
Definition: NanoVDB.h:1786
__hostdev__ uint64_t getMax() const
Definition: NanoVDB.h:4134
__hostdev__ void setStdDeviationOn(bool on=true)
Definition: NanoVDB.h:1996
uint64_t voxelCount
Definition: NanoVDB.h:5846
__hostdev__ uint32_t gridCount() const
Return total number of grids in the buffer.
Definition: NanoVDB.h:2164
__hostdev__ bool isRootConnected() const
return true if RootData follows TreeData in memory without any extra padding
Definition: NanoVDB.h:2111
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
get iterators over attributes to points at a specific voxel location
Definition: NanoVDB.h:5590
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:4847
__hostdev__ bool isValid() const
Methods related to the classification of this grid.
Definition: NanoVDB.h:2254
__hostdev__ void setValue(uint32_t n, const ValueT &v)
Definition: NanoVDB.h:3206
ValueType minimum
Definition: NanoVDB.h:6288
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:3020
ChildT UpperNodeType
Definition: NanoVDB.h:2841
uint32_t mGridCount
Definition: NanoVDB.h:1929
CoordT mBBoxMin
Definition: NanoVDB.h:3657
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:5508
__hostdev__ bool isActive() const
Definition: NanoVDB.h:4337
__hostdev__ const void * nodePtr() const
Return a const void pointer to the first node at LEVEL.
Definition: NanoVDB.h:2035
typename NanoLeaf< BuildT >::ValueType ValueT
Definition: NanoVDB.h:6143
__hostdev__ FloatType getDev() const
Definition: NanoVDB.h:3691
__hostdev__ FloatType stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this leaf node...
Definition: NanoVDB.h:4389
Definition: NanoVDB.h:6285
__hostdev__ bool isValid() const
return true if the magic number and the version are both valid
Definition: NanoVDB.h:1979
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:1727
uint64_t FloatType
Definition: NanoVDB.h:805
__hostdev__ RootT & root()
Definition: NanoVDB.h:2461
float mQuantum
Definition: NanoVDB.h:3741
char * strcpy(char *dst, const char *src)
Copy characters from src to dst.
Definition: Util.h:166
double FloatType
Definition: NanoVDB.h:823
static const int MaxNameSize
Definition: NanoVDB.h:1570
__hostdev__ bool isIndex(GridType gridType)
Return true if the GridType maps to a special index type (not a POD integer type).
Definition: NanoVDB.h:634
Map mMap
Definition: NanoVDB.h:1932
#define NANOVDB_MAGIC_FILE
Definition: NanoVDB.h:141
__hostdev__ ValueType minimum() const
Return a const reference to the minimum active value encoded in this leaf node.
Definition: NanoVDB.h:4377
float type
Definition: NanoVDB.h:549
__hostdev__ const uint32_t & tileCount() const
Return the number of tiles encoded in this root node.
Definition: NanoVDB.h:3036
__hostdev__ Vec3T applyJacobian(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 64bit floating point arithmet...
Definition: NanoVDB.h:1467
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:5517
Utility functions.
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2843
Bit-compacted representation of all three version numbers.
Definition: NanoVDB.h:708
__hostdev__ uint64_t lastOffset() const
Definition: NanoVDB.h:4111
__hostdev__ const GridType & gridType() const
Definition: NanoVDB.h:5506
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:5511
__hostdev__ util::enable_if< BuildTraits< T >::is_index, const uint64_t & >::type valueCount() const
Return the total number of values indexed by this IndexGrid.
Definition: NanoVDB.h:2171
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:4841
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, ChannelT *channelPtr)
Ctor from an IndexGrid and an external channel.
Definition: NanoVDB.h:5701
__hostdev__ bool operator>=(const Version &rhs) const
Definition: NanoVDB.h:734
typename DataType::ValueT ValueType
Definition: NanoVDB.h:3275
typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:1711
static __hostdev__ uint32_t dim()
Return the dimension, in index space, of this leaf node (typically 8 as for openvdb leaf nodes!) ...
Definition: NanoVDB.h:4415
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:5269
typename NanoLeaf< BuildT >::ValueType Type
Definition: NanoVDB.h:6130
Definition: NanoVDB.h:3166
static __hostdev__ uint32_t dim()
Return the dimension, in voxel units, of this internal node (typically 8*16 or 8*16*32) ...
Definition: NanoVDB.h:3469
__hostdev__ bool operator==(const Mask &other) const
Definition: NanoVDB.h:1205
__hostdev__ uint32_t gridIndex() const
Definition: NanoVDB.h:5523
__hostdev__ ChildIter & operator++()
Definition: NanoVDB.h:2890
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:2738
__hostdev__ void setDev(const FloatType &)
Definition: NanoVDB.h:4202
GridBlindMetaData(const GridBlindMetaData &other)
Copy constructor that resets mDataOffset and zeros out mName.
Definition: NanoVDB.h:1604
__hostdev__ TileIterator probe(const CoordT &ijk)
Definition: NanoVDB.h:2761
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4041
__hostdev__ const ChildT * probeChild(ValueType &value) const
Definition: NanoVDB.h:3431
__hostdev__ bool operator==(const Checksum &rhs) const
return true if the checksums are identical
Definition: NanoVDB.h:1883
__hostdev__ const NanoGrid< BuildT > & grid() const
Definition: NanoVDB.h:5564
char * strncpy(char *dst, const char *src, size_t max)
Copies the first num characters of src to dst. If the end of the source C string (which is signaled b...
Definition: Util.h:185
__hostdev__ DenseIterator beginAll() const
Definition: NanoVDB.h:1148
__hostdev__ ConstValueIterator cbeginValueAll() const
Definition: NanoVDB.h:2942
__hostdev__ bool isValid() const
return true if this meta data has a valid combination of semantic, class and value tags...
Definition: NanoVDB.h:1661
__hostdev__ void disable()
Definition: NanoVDB.h:1872
__hostdev__ const NanoGrid< IndexT > & grid() const
Return a const reference to the IndexGrid.
Definition: NanoVDB.h:5714
static constexpr uint32_t SIZE
Definition: NanoVDB.h:1049
uint32_t mNodeCount[3]
Definition: NanoVDB.h:2372
ValueType mMaximum
Definition: NanoVDB.h:3663
typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:1712
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4025
__hostdev__ uint64_t blindDataSize() const
return size in bytes of the blind data represented by this blind meta data
Definition: NanoVDB.h:1693
static __hostdev__ CoordT KeyToCoord(const KeyT &key)
Definition: NanoVDB.h:2617
__hostdev__ const Map & map() const
Return a const reference to the Map for this grid.
Definition: NanoVDB.h:2193
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:4357
__hostdev__ void setRoot(const void *root)
Definition: NanoVDB.h:2377
__hostdev__ BaseIter()
Definition: NanoVDB.h:2863
static __hostdev__ uint32_t wordCount()
Return the number of machine words used by this Mask.
Definition: NanoVDB.h:1059
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:5518
__hostdev__ uint32_t operator*() const
Definition: NanoVDB.h:1122
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3999
typename DataType::BuildT BuildType
Definition: NanoVDB.h:2846
__hostdev__ void setMin(const bool &)
Definition: NanoVDB.h:3993
__hostdev__ bool isValid() const
Definition: NanoVDB.h:5505
__hostdev__ const StatsT & average() const
Definition: NanoVDB.h:2815
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4083
__hostdev__ uint32_t tail() const
Definition: NanoVDB.h:1858
__hostdev__ bool getAvg() const
Definition: NanoVDB.h:3985
__hostdev__ bool updateBBox()
Updates the local bounding box of active voxels in this node. Return true if bbox was updated...
Definition: NanoVDB.h:4572
__hostdev__ DenseIter & operator++()
Definition: NanoVDB.h:2994
Return a pointer to the root Tile where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6235
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:2270
__hostdev__ bool isPointData() const
Definition: NanoVDB.h:2262
__hostdev__ uint64_t last(uint32_t i) const
Definition: NanoVDB.h:4184
bool FloatType
Definition: NanoVDB.h:799
__hostdev__ TileT & operator*() const
Definition: NanoVDB.h:2713
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this internal node and an...
Definition: NanoVDB.h:3492
__hostdev__ Iterator & operator++()
Definition: NanoVDB.h:1097
Definition: NanoVDB.h:785
#define __hostdev__
Definition: Util.h:73
__hostdev__ const Checksum & checksum() const
Definition: NanoVDB.h:5534
typename DataType::FloatType FloatType
Definition: NanoVDB.h:4233
#define NANOVDB_DATA_ALIGNMENT
Definition: NanoVDB.h:133
typename DataType::Tile Tile
Definition: NanoVDB.h:2851
__hostdev__ bool isValid(const GridBlindDataClass &blindClass, const GridBlindDataSemantic &blindSemantics, const GridType &blindType)
return true if the combination of GridBlindDataClass, GridBlindDataSemantic and GridType is valid...
Definition: NanoVDB.h:667
__hostdev__ bool isBreadthFirst() const
Definition: NanoVDB.h:5521
__hostdev__ DenseIterator operator++(int)
Definition: NanoVDB.h:1130
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4117
__hostdev__ void setMax(const ValueT &v)
Definition: NanoVDB.h:3255
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:5515
Definition: NanoVDB.h:939
Coord CoordType
Definition: NanoVDB.h:4235
Dummy type for a 16 bit quantization of floating point values.
Definition: NanoVDB.h:190
uint8_t ArrayType
Definition: NanoVDB.h:3840
typename Mask< Log2Dim >::template Iterator< On > MaskIterT
Definition: NanoVDB.h:3285
__hostdev__ bool hasLongGridName() const
Definition: NanoVDB.h:2267
__hostdev__ TreeT & tree()
Return a non-const reference to the tree.
Definition: NanoVDB.h:2184
CoordT mBBoxMin
Definition: NanoVDB.h:4162
__hostdev__ void setFirstNode(const NodeT *node)
Definition: NanoVDB.h:2389
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:1747
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this internal node and any of its chi...
Definition: NanoVDB.h:3489
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3815
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3197
__hostdev__ void setMin(float min)
Definition: NanoVDB.h:3777
__hostdev__ void setValue(uint32_t offset, bool)
Definition: NanoVDB.h:4040
__hostdev__ void setMax(const ValueType &v)
Definition: NanoVDB.h:3701
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4048
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3671
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:2868
__hostdev__ ChildIter()
Definition: NanoVDB.h:3305
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4199
__hostdev__ BlindDataT * getBlindData(uint32_t n)
Definition: NanoVDB.h:2326
__hostdev__ void setDev(const StatsT &v)
Definition: NanoVDB.h:3257
__hostdev__ ValueType getLastValue() const
If the last entry in this node's table is a tile, return the tile's value. Otherwise, return the result of calling getLastValue() on the child.
Definition: NanoVDB.h:3512
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:5068
__hostdev__ bool isOn(uint32_t n) const
Return true if the given bit is set.
Definition: NanoVDB.h:1217
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3913
__hostdev__ Vec3T applyInverseJacobian(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 64bit floating point arithmet...
Definition: NanoVDB.h:1507
uint64_t ValueType
Definition: NanoVDB.h:4065
uint16_t ArrayType
Definition: NanoVDB.h:3870
__hostdev__ const MaskType< LOG2DIM > & childMask() const
Return a const reference to the bit mask of child nodes in this internal node.
Definition: NanoVDB.h:3479
MatType scale(const Vec3< typename MatType::value_type > &s)
Return a matrix that scales by s.
Definition: Mat.h:615
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4084
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4085
ValueT value
Definition: NanoVDB.h:2670
__hostdev__ void setDev(float dev)
Definition: NanoVDB.h:3786
Node caching at all (three) tree levels.
Definition: NanoVDB.h:5220
__hostdev__ void setDev(const StatsT &v)
Definition: NanoVDB.h:2821
__hostdev__ OnIterator beginOn() const
Definition: NanoVDB.h:1144
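A minimal sketch of the Mask API listed here (the nanovdb namespace is assumed); Mask<3> has 8^3 = 512 bits, matching a standard leaf node:
    nanovdb::Mask<3> mask;            // all 512 bits off
    mask.setOn(0);                    // set the first bit
    mask.setOn(511);                  // ... and the last bit
    uint32_t n  = mask.countOn();     // 2
    bool     on = mask.isOn(511);     // true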
Definition: NanoVDB.h:1774
__hostdev__ void setAvg(const FloatType &)
Definition: NanoVDB.h:4201
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5343
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:4192
bool Type
Definition: NanoVDB.h:6170
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:1726
BuildT BuildType
Definition: NanoVDB.h:5244
Struct with all the member data of the LeafNode (useful during serialization of an openvdb LeafNode) ...
Definition: NanoVDB.h:3647
__hostdev__ const BBoxType & bbox() const
Return a const reference to the index bounding box of all the active values in this tree...
Definition: NanoVDB.h:3027
GridBlindDataSemantic
Blind-data Semantics that are currently understood by NanoVDB.
Definition: NanoVDB.h:411
Version mVersion
Definition: NanoVDB.h:1926
__hostdev__ void setAverageOn(bool on=true)
Definition: NanoVDB.h:1995
__hostdev__ bool isSequential() const
return true if nodes at all levels can safely be accessed with simple linear offsets ...
Definition: NanoVDB.h:2283
__hostdev__ Map()
Default constructor for the identity map.
Definition: NanoVDB.h:1407
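A small sketch of the identity Map; applyInverseMap is assumed to be the counterpart of applyMap on the Map class, and a non-identity transform would instead be initialized via Map::set from 3x3 or 4x4 matrices:
    nanovdb::Map map;                                                  // default constructed: identity transform
    nanovdb::Vec3d xyz = map.applyMap(nanovdb::Vec3d(1.0, 2.0, 3.0));  // forward affine map (index -> world)
    nanovdb::Vec3d ijk = map.applyInverseMap(xyz);                     // inverse affine map (world -> index)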
GridFlags
Grid flags which indicate what extra information is present in the grid buffer.
Definition: NanoVDB.h:320
Metafunction used to determine if the first template parameter is a specialization of the class templ...
Definition: Util.h:484
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3881
__hostdev__ uint32_t & checksum(int i)
Definition: NanoVDB.h:1850
__hostdev__ DenseIterator()
Definition: NanoVDB.h:3420
uint32_t nameSize
Definition: NanoVDB.h:5852
ReadAccessor< ValueT, LEVEL0, LEVEL1, LEVEL2 > createAccessor(const NanoGrid< ValueT > &grid)
Free-standing function for convenient creation of a ReadAccessor with optional and customizable node ...
Definition: NanoVDB.h:5438
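A minimal usage sketch, assuming a valid nanovdb::NanoGrid<float>* named grid is already at hand (e.g. obtained from a GridHandle) and that the default template arguments of createAccessor select full node caching. Because a ReadAccessor caches previous access patterns it is not thread-safe, so create one instance per thread:
    auto  acc = nanovdb::createAccessor(*grid);           // ReadAccessor<float> with cached node access
    float v   = acc.getValue(nanovdb::Coord(10, 20, 30)); // accelerated random access to a voxel value
    bool  on  = acc.isActive(nanovdb::Coord(10, 20, 30)); // active state of the same voxel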
RootT RootType
Definition: NanoVDB.h:2433
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3751
Definition: GridHandle.h:27
float type
Definition: NanoVDB.h:542
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Definition: NanoVDB.h:5532
CoordBBox bbox
Definition: NanoVDB.h:6290
float Type
Definition: NanoVDB.h:548
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and update the value of the specified voxel
Definition: NanoVDB.h:2478
Visits all tile values and child nodes of this node.
Definition: NanoVDB.h:3414
GridType mGridType
Definition: NanoVDB.h:1936
__hostdev__ uint64_t gridSize() const
Definition: NanoVDB.h:5522
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:4933
Definition: NanoVDB.h:1080
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and update the value of the specified voxel
Definition: NanoVDB.h:3521
GridType gridType
Definition: NanoVDB.h:5847
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:4075
Define static boolean tests for template build types.
Definition: NanoVDB.h:484
__hostdev__ bool isFull() const
return true if the 64 bit checksum is full, i.e. covers both the head and the nodes
Definition: NanoVDB.h:1867
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:2265
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:2908
char * sprint(char *dst, T var1, Types...var2)
prints a variable number of strings and/or numbers to a destination string
Definition: Util.h:286
Bit-mask to encode active states and facilitate sequential iterators and a fast codec for I/O compres...
Definition: NanoVDB.h:1046
CoordT CoordType
Definition: NanoVDB.h:4908
__hostdev__ const GridBlindMetaData * blindMetaData(uint32_t n) const
Returns a const reference to the blindMetaData at the specified linear offset.
Definition: NanoVDB.h:2067
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:4953
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3239
static ElementType scalar(const T &v)
Definition: NanoVDB.h:779
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:3376
__hostdev__ void setMax(float max)
Definition: NanoVDB.h:3780
__hostdev__ TileIter()
Definition: NanoVDB.h:2696
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3814
__hostdev__ bool getMax() const
Definition: NanoVDB.h:3984
uint64_t mData2
Definition: NanoVDB.h:1941
typename ChildT::ValueType ValueT
Definition: NanoVDB.h:2599
float mMinimum
Definition: NanoVDB.h:3740
__hostdev__ uint64_t offset() const
Definition: NanoVDB.h:4181
const std::enable_if<!VecTraits< T >::IsVec, T >::type & min(const T &a, const T &b)
Definition: Composite.h:106
static __hostdev__ Coord OffsetToLocalCoord(uint32_t n)
Definition: NanoVDB.h:3544
__hostdev__ const Vec3d & voxelSize() const
Return a vector of the axial voxel sizes.
Definition: NanoVDB.h:5720
__hostdev__ constexpr uint32_t strlen()
return the number of characters (including null termination) required to convert enum type to a strin...
Definition: NanoVDB.h:204
typename NanoLeaf< BuildT >::FloatType FloatType
Definition: NanoVDB.h:6284
Definition: NanoVDB.h:4061
KeyT key
Definition: NanoVDB.h:2667
__hostdev__ bool isChild() const
Definition: NanoVDB.h:2728
uint64_t FloatType
Definition: NanoVDB.h:811
__hostdev__ uint64_t pointCount() const
Definition: NanoVDB.h:4182
typename DataType::StatsT FloatType
Definition: NanoVDB.h:3276
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:5130
__hostdev__ uint32_t & tail()
Definition: NanoVDB.h:1859
bool ValueType
Definition: NanoVDB.h:3966
__hostdev__ Tile * probeTile(const CoordT &ijk)
Definition: NanoVDB.h:2777
__hostdev__ uint64_t getAvg() const
Definition: NanoVDB.h:4115
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:4853
__hostdev__ uint64_t checksum() const
return the 64 bit checksum of this instance
Definition: NanoVDB.h:1848
Dummy type for a voxel whose value equals an offset into an external value array of active values...
Definition: NanoVDB.h:175
__hostdev__ ValueOnIterator beginValueOn() const
Definition: NanoVDB.h:4272
Top-most node of the VDB tree structure.
Definition: NanoVDB.h:2834
int64_t child
Definition: NanoVDB.h:3169
#define NANOVDB_MAJOR_VERSION_NUMBER
Definition: NanoVDB.h:146
__hostdev__ Vec3T applyJacobianF(const Vec3T &ijk) const
Apply the linear forward 3x3 transformation to an input 3d vector using 32bit floating point arithmet...
Definition: NanoVDB.h:1476
Index64 memUsage(const TreeT &tree, bool threaded=true)
Return the total amount of memory in bytes occupied by this tree.
Definition: Count.h:493
uint8_t ArrayType
Definition: NanoVDB.h:3803
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3675
Struct to derive node type from its level in a given grid, tree or root while preserving constness...
Definition: NanoVDB.h:1704
typename GridT::TreeType type
Definition: NanoVDB.h:2408
__hostdev__ Codec toCodec(const char *str)
Definition: NanoVDB.h:5807
Definition: NanoVDB.h:2875
uint32_t level
Definition: NanoVDB.h:6287
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3851
uint32_t mTileCount[3]
Definition: NanoVDB.h:2373
typename RootT::ChildNodeType Node2
Definition: NanoVDB.h:2444
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3402
__hostdev__ const ValueType & minimum() const
Return a const reference to the minimum active value encoded in this internal node and any of its chi...
Definition: NanoVDB.h:3486
ValueType mMinimum
Definition: NanoVDB.h:3662
__hostdev__ const void * blindData(uint32_t n) const
Returns a const pointer to the blindData at the specified linear offset.
Definition: NanoVDB.h:2311
__hostdev__ GridType toGridType()
Maps from a templated build type to a GridType enum.
Definition: NanoVDB.h:830
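A hedged sketch combining toGridType with the isValid(GridType, GridClass) check listed elsewhere in this header; the enumerators GridType::Float and GridClass::LevelSet are assumed to be spelled as shown:
    nanovdb::GridType type = nanovdb::toGridType<float>();            // build type float -> GridType::Float
    bool ok = nanovdb::isValid(type, nanovdb::GridClass::LevelSet);   // true: a float level set is a valid combination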
size_t strlen(const char *str)
length of a c-string, excluding '\0'.
Definition: Util.h:153
static __hostdev__ uint32_t dim()
Definition: NanoVDB.h:4228
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:5330
uint64_t Type
Definition: NanoVDB.h:513
__hostdev__ uint32_t blindDataCount() const
Definition: NanoVDB.h:5530
uint64_t type
Definition: NanoVDB.h:521
const typename NanoRoot< BuildT >::Tile * Type
Definition: NanoVDB.h:6237
__hostdev__ float getDev() const
return the quantized standard deviation of the active values in this node
Definition: NanoVDB.h:3774
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:3397
ValueT mMaximum
Definition: NanoVDB.h:3183
__hostdev__ uint64_t idx(int i, int j, int k) const
Definition: NanoVDB.h:5740
static __hostdev__ CoordT OffsetToLocalCoord(uint32_t n)
Compute the local coordinates from a linear offset.
Definition: NanoVDB.h:4399
__hostdev__ const math::BBox< CoordType > & bbox() const
Return a const reference to the bounding box in index space of active values in this internal node an...
Definition: NanoVDB.h:3501
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel.
Definition: NanoVDB.h:3518
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this root node...
Definition: NanoVDB.h:3052
__hostdev__ uint64_t getValue(uint32_t i) const
Definition: NanoVDB.h:4185
__hostdev__ bool operator<=(const Version &rhs) const
Definition: NanoVDB.h:732
__hostdev__ bool getValue(uint32_t i) const
Definition: NanoVDB.h:3982
T ElementType
Definition: NanoVDB.h:767
bool Type
Definition: NanoVDB.h:6248
typename RootType::LeafNodeType LeafNodeType
Definition: NanoVDB.h:2134
float Type
Definition: NanoVDB.h:562
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:4966
__hostdev__ auto pos() const
Definition: NanoVDB.h:2707
uint64_t Type
Definition: NanoVDB.h:520
__hostdev__ uint32_t getMinor() const
Definition: NanoVDB.h:737
Struct with all the member data of the RootNode (useful during serialization of an openvdb RootNode) ...
Definition: NanoVDB.h:2597
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3331
Data encoded at the head of each segment of a file or stream.
Definition: NanoVDB.h:5819
__hostdev__ ValueIterator operator++(int)
Definition: NanoVDB.h:4348
__hostdev__ int findBlindDataForSemantic(GridBlindDataSemantic semantic) const
Return the index of the first blind data with specified semantic if found, otherwise -1...
Definition: NanoVDB.h:2339
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3746
__hostdev__ ValueOffIterator(const LeafNode *parent)
Definition: NanoVDB.h:4287
openvdb::GridBase Grid
Definition: Utils.h:43
__hostdev__ Mask(bool on)
Definition: NanoVDB.h:1156
__hostdev__ void setOff(uint32_t n)
Set the specified bit off.
Definition: NanoVDB.h:1243
__hostdev__ const char * gridName() const
Return a c-string with the name of this grid.
Definition: NanoVDB.h:2286
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4293
__hostdev__ bool isFogVolume() const
Definition: NanoVDB.h:5509
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:2435
double FloatType
Definition: NanoVDB.h:793
Version version
Definition: NanoVDB.h:5857
__hostdev__ ChildT * getChild(const Tile *tile)
Returns a pointer to the child node in the specified tile.
Definition: NanoVDB.h:2802
__hostdev__ const Checksum & checksum() const
Return checksum of the grid buffer.
Definition: NanoVDB.h:2292
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:5056
GridClass mGridClass
Definition: NanoVDB.h:1935
__hostdev__ Version(uint32_t data)
Constructor from a raw uint32_t data representation.
Definition: NanoVDB.h:721
Dummy type for a voxel whose value equals an offset into an external value array. ...
Definition: NanoVDB.h:172
Maps one type (e.g. the build types above) to other (actual) types.
Definition: NanoVDB.h:504
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2456
__hostdev__ GridClass toGridClass(GridBlindDataSemantic semantics, GridClass defaultClass=GridClass::Unknown)
Maps from GridBlindDataSemantic to GridClass.
Definition: NanoVDB.h:431
__hostdev__ uint32_t nodeCount(uint32_t level) const
Definition: NanoVDB.h:5533
__hostdev__ ValueType getLastValue() const
Return the last value in this leaf node.
Definition: NanoVDB.h:4454
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:3789
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:4962
__hostdev__ bool isHalf() const
Definition: NanoVDB.h:1864
__hostdev__ uint64_t getDev() const
Definition: NanoVDB.h:4116
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3980
math::BBox< CoordT > mBBox
Definition: NanoVDB.h:2629
__hostdev__ ValueIterator(const LeafNode *parent)
Definition: NanoVDB.h:4320
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:4814
typename RootT::ValueType ValueType
Definition: NanoVDB.h:4808
__hostdev__ DataT * data() const
Definition: NanoVDB.h:2723
__hostdev__ uint32_t id() const
Definition: NanoVDB.h:735
__hostdev__ size_t memUsage() const
Definition: NanoVDB.h:3911
uint16_t blindDataCount
Definition: NanoVDB.h:5856
__hostdev__ ValueType getMin() const
Definition: NanoVDB.h:4194
__hostdev__ const NodeT * getFirstNode() const
return a const pointer to the first node of the specified type
Definition: NanoVDB.h:2535
typename ChildT::CoordType CoordType
Definition: NanoVDB.h:2848
__hostdev__ uint32_t getPatch() const
Definition: NanoVDB.h:738
Definition: NanoVDB.h:2945
__hostdev__ DenseIter(RootT *parent)
Definition: NanoVDB.h:2986
__hostdev__ const FloatType & stdDeviation() const
Return a const reference to the standard deviation of all the active values encoded in this internal ...
Definition: NanoVDB.h:3498
__hostdev__ void setOn(uint32_t n)
Set the specified bit on.
Definition: NanoVDB.h:1241
__hostdev__ const uint64_t & firstOffset() const
Definition: NanoVDB.h:4082
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4265
__hostdev__ Vec3T applyInverseJacobianF(const Vec3T &xyz) const
Apply the linear inverse 3x3 transformation to an input 3d vector using 32bit floating point arithmet...
Definition: NanoVDB.h:1516
__hostdev__ bool isCompatible() const
Definition: NanoVDB.h:739
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:4957
__hostdev__ ValueType getValue(const CoordType &ijk) const
Definition: NanoVDB.h:5126
static __hostdev__ constexpr uint8_t bitWidth()
Definition: NanoVDB.h:3850
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:5133
__hostdev__ const ValueType & background() const
Return a const reference to the background value.
Definition: NanoVDB.h:2481
const typename GridOrTreeOrRootT::LeafNodeType type
Definition: NanoVDB.h:1719
__hostdev__ int age() const
Returns the difference between major version of this instance and NANOVDB_MAJOR_VERSION_NUMBER.
Definition: NanoVDB.h:743
__hostdev__ bool isRootNext() const
return true if RootData is laid out immediately after TreeData in memory
Definition: NanoVDB.h:2398
__hostdev__ NodeT * getFirstNode()
return a pointer to the first node of the specified type
Definition: NanoVDB.h:2525
__hostdev__ const NodeTrait< RootT, 2 >::type * getFirstUpper() const
Definition: NanoVDB.h:2565
CheckMode
List of different modes for computing a checksum.
Definition: NanoVDB.h:1791
__hostdev__ void setAvg(const FloatType &v)
Definition: NanoVDB.h:3702
__hostdev__ uint8_t bitWidth() const
Definition: NanoVDB.h:3910
__hostdev__ const FloatType & average() const
Return a const reference to the average of all the active values encoded in this root node and any of...
Definition: NanoVDB.h:3046
__hostdev__ ValueIter operator++(int)
Definition: NanoVDB.h:2930
bool isValid() const
Definition: NanoVDB.h:5824
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:3191
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4327
uint16_t gridCount
Definition: NanoVDB.h:5822
__hostdev__ void extrema(ValueType &min, ValueType &max) const
Sets the extrema values of all the active values in this tree, i.e. in all nodes of the tree...
Definition: NanoVDB.h:2585
__hostdev__ uint32_t pos() const
Definition: NanoVDB.h:1095
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3761
__hostdev__ T & getValue(const math::Coord &ijk, T *channelPtr) const
Return the value from a specified channel that maps to the specified coordinate.
Definition: NanoVDB.h:5759
typename Node2::ChildNodeType Node1
Definition: NanoVDB.h:2445
Dummy type for 16 bit floating point values (placeholder for IEEE 754 Half).
Definition: NanoVDB.h:181
static __hostdev__ uint64_t memUsage()
return memory usage in bytes for the class
Definition: NanoVDB.h:2459
RootT RootNodeType
Definition: NanoVDB.h:2434
__hostdev__ Vec3T applyInverseMapF(const Vec3T &xyz) const
Definition: NanoVDB.h:2018
__hostdev__ uint64_t first(uint32_t i) const
Definition: NanoVDB.h:4183
__hostdev__ bool hasStdDeviation() const
Definition: NanoVDB.h:5520
__hostdev__ bool isGridIndex() const
Definition: NanoVDB.h:2261
__hostdev__ ReadAccessor(const TreeT &tree)
Constructor from a tree.
Definition: NanoVDB.h:5062
__hostdev__ NodeT * child() const
Definition: NanoVDB.h:2743
uint32_t countOn(uint64_t v)
Definition: Util.h:656
__hostdev__ ChannelAccessor(const NanoGrid< IndexT > &grid, uint32_t channelID=0u)
Ctor from an IndexGrid and an integer ID of an internal channel that is assumed to exist as blind dat...
Definition: NanoVDB.h:5690
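A hedged sketch of reading an internal channel through a ChannelAccessor; the template argument order (ChannelT first, then the index build type) and the name indexGrid for a valid NanoGrid<nanovdb::ValueIndex>* are assumptions for illustration:
    nanovdb::ChannelAccessor<float, nanovdb::ValueIndex> acc(*indexGrid, 0u); // bind blind-data channel #0
    float v = acc.getValue(nanovdb::Coord(1, 2, 3));                          // channel value that maps to (1,2,3)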
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of points in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:5568
void ArrayType
Definition: NanoVDB.h:4067
__hostdev__ ChannelT & operator()(const math::Coord &ijk) const
Definition: NanoVDB.h:5744
__hostdev__ uint32_t countOn(uint32_t i) const
Return the number of lower set bits in mask up to but excluding the i'th bit.
Definition: NanoVDB.h:1071
__hostdev__ ChildIter()
Definition: NanoVDB.h:2883
__hostdev__ bool hasStats() const
Definition: NanoVDB.h:4080
Definition: NanoVDB.h:1568
__hostdev__ uint64_t memUsage() const
Return the actual memory footprint of this root node.
Definition: NanoVDB.h:3058
int64_t child
Definition: NanoVDB.h:2668
__hostdev__ void fill(const ValueType &v)
Definition: NanoVDB.h:3711
__hostdev__ bool getAvg() const
Definition: NanoVDB.h:4038
BuildT ArrayType
Definition: NanoVDB.h:3654
uint32_t mBlindMetadataCount
Definition: NanoVDB.h:1938
__hostdev__ OffIterator beginOff() const
Definition: NanoVDB.h:1146
__hostdev__ DenseIterator beginDense()
Definition: NanoVDB.h:3010
BuildT BuildType
Definition: NanoVDB.h:4906
Version version
Definition: NanoVDB.h:5821
__hostdev__ bool getMin() const
Definition: NanoVDB.h:3983
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:4370
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:4851
__hostdev__ bool getMax() const
Definition: NanoVDB.h:4037
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:2956
bool BuildType
Definition: NanoVDB.h:3967
__hostdev__ CoordT origin() const
Definition: NanoVDB.h:2666
__hostdev__ bool operator<(const Version &rhs) const
Definition: NanoVDB.h:731
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:5134
__hostdev__ const Vec3dBBox & worldBBox() const
return AABB of active values in world space
Definition: NanoVDB.h:2093
__hostdev__ void setOn(uint32_t offset)
Definition: NanoVDB.h:3992
__hostdev__ uint8_t flags() const
Definition: NanoVDB.h:4391
__hostdev__ const ValueT & getMax() const
Definition: NanoVDB.h:3242
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2133
__hostdev__ bool isEmpty() const
Return true if this RootNode is empty, i.e. contains no values or nodes.
Definition: NanoVDB.h:3061
VecT< GridHandleT > readUncompressedGrids(const char *fileName, const typename GridHandleT::BufferType &buffer=typename GridHandleT::BufferType())
Read multiple uncompressed NanoVDB grids from a file and return them as a vector.
Definition: NanoVDB.h:6059
uint64_t Type
Definition: NanoVDB.h:569
CoordT mBBoxMin
Definition: NanoVDB.h:4070
__hostdev__ uint32_t checksum(int i) const
Definition: NanoVDB.h:1852
__hostdev__ bool operator!=(const Mask &other) const
Definition: NanoVDB.h:1214
CoordT CoordType
Definition: NanoVDB.h:5038
Dummy type for a variable bit quantization of floating point values.
Definition: NanoVDB.h:193
__hostdev__ Vec3T indexToWorldF(const Vec3T &xyz) const
index to world space transformation
Definition: NanoVDB.h:2224
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:5510
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:5519
__hostdev__ const MaskType< LOG2DIM > & getChildMask() const
Definition: NanoVDB.h:3480
StatsT mAverage
Definition: NanoVDB.h:2635
Visits all values in a leaf node, i.e. both active and inactive values.
Definition: NanoVDB.h:4309
__hostdev__ void setMin(const ValueT &v)
Definition: NanoVDB.h:3254
__hostdev__ bool hasAverage() const
Definition: NanoVDB.h:2268
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3368
__hostdev__ bool isActive() const
Definition: NanoVDB.h:2665
Visits active tile values of this node only.
Definition: NanoVDB.h:3380
__hostdev__ const NodeTrait< RootT, LEVEL >::type * getFirstNode() const
return a const pointer to the first node of the specified level
Definition: NanoVDB.h:2554
#define NANOVDB_HOSTDEV_DISABLE_WARNING
Definition: Util.h:94
__hostdev__ void setValueOnly(uint32_t offset, const ValueType &value)
Definition: NanoVDB.h:3680
Visits all inactive values in a leaf node.
Definition: NanoVDB.h:4276
__hostdev__ const TreeType & tree() const
Return a const reference to the tree of the IndexGrid.
Definition: NanoVDB.h:5717
typename RootT::ValueType ValueType
Definition: NanoVDB.h:2438
typename NodeTrait< GridOrTreeOrRootT, LEVEL >::type NodeTraitT
Definition: NanoVDB.h:1767
static __hostdev__ uint64_t memUsage(uint32_t tableSize)
Return the expected memory footprint in bytes with the specified number of tiles. ...
Definition: NanoVDB.h:3055
Definition: NanoVDB.h:1776
__hostdev__ ValueIterator beginValue() const
Definition: NanoVDB.h:4356
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:4029
GridMetaData(const NanoGrid< T > &grid)
Definition: NanoVDB.h:5470
float FloatType
Definition: NanoVDB.h:787
__hostdev__ CheckMode mode() const
return the mode of the 64 bit checksum
Definition: NanoVDB.h:1875
__hostdev__ bool isMask() const
Definition: NanoVDB.h:2263
__hostdev__ Vec3T applyJacobianF(const Vec3T &xyz) const
Definition: NanoVDB.h:2020
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:4960
__hostdev__ void setMax(const bool &)
Definition: NanoVDB.h:3994
typename RootNodeType::ChildNodeType UpperNodeType
Definition: NanoVDB.h:2132
typename ChildT::BuildType BuildT
Definition: NanoVDB.h:2600
typename BuildT::ValueType ValueType
Definition: NanoVDB.h:2136
__hostdev__ uint32_t nodeCount() const
Return number of nodes at LEVEL.
Definition: NanoVDB.h:2058
float ValueType
Definition: NanoVDB.h:3732
__hostdev__ Mask & operator&=(const Mask &other)
Bitwise intersection.
Definition: NanoVDB.h:1318
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5139
__hostdev__ const TreeT & tree() const
Return a const reference to the tree.
Definition: NanoVDB.h:2181
__hostdev__ bool safeCast() const
return true if the RootData follows right after the TreeData. If so, this implies that it's safe to c...
Definition: NanoVDB.h:5493
uint32_t findLowestOn(uint32_t v)
Returns the index of the lowest, i.e. least significant, on bit in the specified 32 bit word...
Definition: Util.h:536
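A small sketch of the bit utilities from Util.h, assuming they are free functions in the nanovdb::util namespace like the other Util.h helpers listed here:
    uint32_t n   = nanovdb::util::countOn(0xF0ull);     // 4: number of set bits in the 64 bit word
    uint32_t bit = nanovdb::util::findLowestOn(0xF0u);  // 4: index of the least significant on bit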
__hostdev__ ChannelT & getValue(const math::Coord &ijk) const
Return the value from a cached channel that maps to the specified coordinate.
Definition: NanoVDB.h:5743
uint64_t mData1
Definition: NanoVDB.h:1940
BitFlags(Type mask)
Definition: NanoVDB.h:947
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:3442
bool streq(const char *lhs, const char *rhs)
Test if two null-terminated byte strings are the same.
Definition: Util.h:268
__hostdev__ Vec3T worldToIndexDirF(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:2234
__hostdev__ BaseIter(DataT *data)
Definition: NanoVDB.h:2864
__hostdev__ Iterator()
Definition: NanoVDB.h:1083
typename ChildT::template MaskType< LOG2DIM > MaskT
Definition: NanoVDB.h:3163
__hostdev__ uint64_t getDev() const
Definition: NanoVDB.h:4136
BitFlags< 32 > mFlags
Definition: NanoVDB.h:1927
__hostdev__ Vec3T applyInverseJacobianF(const Vec3T &xyz) const
Definition: NanoVDB.h:2022
__hostdev__ ValueOnIter operator++(int)
Definition: NanoVDB.h:2963
uint8_t mFlags
Definition: NanoVDB.h:4164
__hostdev__ void setAvg(const bool &)
Definition: NanoVDB.h:3995
__hostdev__ void setMin(const ValueType &v)
Definition: NanoVDB.h:3700
__hostdev__ bool getMin() const
Definition: NanoVDB.h:4036
__hostdev__ bool isStaggered() const
Definition: NanoVDB.h:2259
__hostdev__ ConstChildIterator cbeginChild() const
Definition: NanoVDB.h:3338
__hostdev__ bool isEmpty() const
test if the grid is empty, i.e. the root table has size 0
Definition: NanoVDB.h:2107
__hostdev__ uint64_t gridPoints(const AttT *&begin, const AttT *&end) const
Return the total number of points in the grid and set the iterators to the complete range of points...
Definition: NanoVDB.h:5635
__hostdev__ NodeT * operator->() const
Definition: NanoVDB.h:3321
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3738
#define __device__
Definition: Util.h:79
__hostdev__ Vec3T applyInverseMap(const Vec3T &xyz) const
Definition: NanoVDB.h:2007
int64_t mBlindMetadataOffset
Definition: NanoVDB.h:1937
float mTaperF
Definition: NanoVDB.h:1400
Implements Tree::probeLeaf(math::Coord)
Definition: NanoVDB.h:1784
__hostdev__ ChildIter(RootT *parent)
Definition: NanoVDB.h:2884
__hostdev__ void setValue(uint32_t offset, const ValueType &value)
Definition: NanoVDB.h:3681
__hostdev__ CoordBBox bbox() const
Return the index bounding box of all the active values in this tree, i.e. in all nodes of the tree...
Definition: NanoVDB.h:2395
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4073
typename RootT::CoordType CoordType
Definition: NanoVDB.h:4809
__hostdev__ MagicType toMagic(uint64_t magic)
maps 64 bits of magic number to enum
Definition: NanoVDB.h:359
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:3466
GridClass gridClass
Definition: NanoVDB.h:5848
__hostdev__ bool isLevelSet() const
Definition: NanoVDB.h:2257
Codec codec
Definition: NanoVDB.h:5823
__hostdev__ ChildT * probeChild(const CoordT &ijk)
Definition: NanoVDB.h:2788
RootType RootNodeType
Definition: NanoVDB.h:2131
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache.
Definition: NanoVDB.h:5301
static __hostdev__ uint64_t memUsage()
Return memory usage in bytes for this class only.
Definition: NanoVDB.h:2090
uint64_t gridSize
Definition: NanoVDB.h:5846
__hostdev__ void setValueOnly(const CoordT &ijk, const ValueType &v)
Definition: NanoVDB.h:4465
__hostdev__ const NodeT * getNode() const
Return a const pointer to the cached node of the specified type.
Definition: NanoVDB.h:5285
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3875
__hostdev__ bool isFloatingPoint(GridType gridType)
return true if the GridType maps to a floating point type
Definition: NanoVDB.h:595
CoordT mBBoxMin
Definition: NanoVDB.h:3972
__hostdev__ void setOff()
Set all bits off.
Definition: NanoVDB.h:1298
__hostdev__ void localToGlobalCoord(Coord &ijk) const
modifies local coordinates to global coordinates of a tile or child node
Definition: NanoVDB.h:3552
__hostdev__ void setAvg(const StatsT &v)
Definition: NanoVDB.h:2820
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3363
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:3522
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
return the state and update the value of the specified voxel
Definition: NanoVDB.h:3068
__hostdev__ Vec3T indexToWorldGrad(const Vec3T &grad) const
transform the gradient from index space to world space.
Definition: NanoVDB.h:2216
__hostdev__ uint64_t activeVoxelCount() const
Return the total number of active voxels in this grid.
Definition: NanoVDB.h:2251
__hostdev__ const LeafNode * probeLeaf(const CoordT &) const
Definition: NanoVDB.h:4489
__hostdev__ uint64_t * words()
Return a pointer to the list of words of the bit mask.
Definition: NanoVDB.h:1171
__hostdev__ void init(float min, float max, uint8_t bitWidth)
Definition: NanoVDB.h:3755
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:4850
__hostdev__ Version(uint32_t major, uint32_t minor, uint32_t patch)
Constructor from major.minor.patch version numbers.
Definition: NanoVDB.h:723
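A minimal sketch of Version comparisons; the version numbers are arbitrary example values:
    nanovdb::Version mine(32, 7, 0), file(32, 3, 0); // major.minor.patch packed into one 32 bit word
    bool older = file < mine;                        // lexicographic comparison of (major, minor, patch)
    uint32_t minor = file.getMinor();                // 3
    uint32_t patch = file.getPatch();                // 0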
__hostdev__ Mask & operator|=(const Mask &other)
Bitwise union.
Definition: NanoVDB.h:1326
static __hostdev__ uint32_t voxelCount()
Return the total number of voxels (e.g. values) encoded in this leaf node.
Definition: NanoVDB.h:4432
typename ChildT::BuildType BuildT
Definition: NanoVDB.h:3160
__hostdev__ DataType * data()
Definition: NanoVDB.h:4368
__hostdev__ Mask & operator-=(const Mask &other)
Bitwise difference.
Definition: NanoVDB.h:1334
__hostdev__ Checksum(uint64_t checksum, CheckMode mode=CheckMode::Full)
Definition: NanoVDB.h:1841
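A minimal sketch of wrapping a precomputed 64 bit checksum, using the CheckMode::Full enumerator that appears as the default argument above:
    nanovdb::Checksum cs(0x1234567887654321ull, nanovdb::CheckMode::Full); // head and node data both covered
    bool full = cs.isFull();                                               // true
    cs.disable();                                                          // mark the checksum as disabled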
static __hostdev__ bool safeCast(const NanoGrid< T > &grid)
return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:5504
__hostdev__ ValueIterator beginValue()
Definition: NanoVDB.h:2941
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:5274
uint64_t mPointCount
Definition: NanoVDB.h:4168
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:4848
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4079
__hostdev__ CoordType origin() const
Return the origin in index space of this leaf node.
Definition: NanoVDB.h:3483
__hostdev__ DenseIterator(uint32_t pos=Mask::SIZE)
Definition: NanoVDB.h:1117
__hostdev__ CoordT getCoord() const
Definition: NanoVDB.h:4298
PointAccessor(const NanoGrid< BuildT > &grid)
Definition: NanoVDB.h:5551
__hostdev__ Vec3T applyIJT(const Vec3T &xyz) const
Apply the transposed inverse 3x3 transformation to an input 3d vector using 64bit floating point arit...
Definition: NanoVDB.h:1525
__hostdev__ void setValueOnly(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:4186
__hostdev__ uint64_t getIndex(const math::Coord &ijk) const
Return the linear offset into a channel that maps to the specified coordinate.
Definition: NanoVDB.h:5739
ValueT mMinimum
Definition: NanoVDB.h:3182
__hostdev__ bool setGridName(const char *src)
Definition: NanoVDB.h:1997
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3975
__hostdev__ void localToGlobalCoord(Coord &ijk) const
Converts (in place) a local index coordinate to a global index coordinate.
Definition: NanoVDB.h:4407
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:5251
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4205
uint64_t ValueType
Definition: NanoVDB.h:4156
Dummy type for an 8 bit quantization of floating point values.
Definition: NanoVDB.h:187
__hostdev__ DataType * data()
Definition: NanoVDB.h:2454
typename NanoLeaf< BuildT >::ValueType ValueType
Definition: NanoVDB.h:6283
MagicType
Enums used to identify magic numbers recognized by NanoVDB.
Definition: NanoVDB.h:350
__hostdev__ uint32_t getDim(const CoordType &ijk, const RayT &ray) const
Definition: NanoVDB.h:5384
__hostdev__ bool isValueOn() const
Definition: NanoVDB.h:2993
Dummy type for a voxel whose value equals its binary active state.
Definition: NanoVDB.h:178
uint8_t mFlags
Definition: NanoVDB.h:4072
uint64_t mPrefixSum
Definition: NanoVDB.h:4074
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:2871
__hostdev__ Vec3T applyJacobian(const Vec3T &xyz) const
Definition: NanoVDB.h:2009
__hostdev__ util::enable_if< util::is_same< T, Point >::value, const uint64_t & >::type pointCount() const
Return the total number of points indexed by this PointGrid.
Definition: NanoVDB.h:2178
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:5079
typename util::match_const< ChildT, DataT >::type NodeT
Definition: NanoVDB.h:2692
Definition: IndexIterator.h:43
__hostdev__ ChildIter(ParentT *parent)
Definition: NanoVDB.h:3310
uint32_t mGridIndex
Definition: NanoVDB.h:1928
__hostdev__ ValueOnIterator(const LeafNode *parent)
Definition: NanoVDB.h:4254
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:5132
uint64_t mVoxelCount
Definition: NanoVDB.h:2374
static __hostdev__ uint32_t CoordToOffset(const CoordType &ijk)
Return the linear offset corresponding to the given coordinate.
Definition: NanoVDB.h:3536
Vec3d voxelSize
Definition: NanoVDB.h:5851
__hostdev__ uint32_t nodeCount() const
Definition: NanoVDB.h:2504
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:5131
uint64_t type
Definition: NanoVDB.h:514
GridBlindDataSemantic mSemantic
Definition: NanoVDB.h:1574
__hostdev__ Vec3T applyMap(const Vec3T &ijk) const
Apply the forward affine transformation to a vector using 64bit floating point arithmetic.
Definition: NanoVDB.h:1450
CoordT CoordType
Definition: NanoVDB.h:5246
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3407
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3677
__hostdev__ const CoordBBox & indexBBox() const
return AABB of active values in index space
Definition: NanoVDB.h:2096
__hostdev__ bool isFloatingPointVector(GridType gridType)
return true if the GridType maps to a floating point vec3.
Definition: NanoVDB.h:609
ValueT mBackground
Definition: NanoVDB.h:2632
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:1748
__hostdev__ ValueOnIterator()
Definition: NanoVDB.h:3386
__hostdev__ bool isInteger(GridType gridType)
Return true if the GridType maps to a POD integer type.
Definition: NanoVDB.h:621
__hostdev__ AccessorType getAccessor() const
Definition: NanoVDB.h:2465
__hostdev__ uint64_t leafPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
Return the number of points in the leaf node containing the coordinate ijk. If this return value is l...
Definition: NanoVDB.h:5645
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findPrev(uint32_t start) const
Definition: NanoVDB.h:1376
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:3066
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:2922
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:4939
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType type
Definition: NanoVDB.h:1734
Codec codec
Definition: NanoVDB.h:5855
Defines an affine transform and its inverse represented as a 3x3 matrix and a vec3 translation...
Definition: NanoVDB.h:1395
uint64_t FloatType
Definition: NanoVDB.h:4066
float Type
Definition: NanoVDB.h:541
__hostdev__ void clear()
Reset this access to its initial state, i.e. with an empty cache. Noop, since this template specializa...
Definition: NanoVDB.h:4833
__hostdev__ const Tile * probeTile(const CoordT &ijk) const
Definition: NanoVDB.h:2783
#define NANOVDB_ASSERT(x)
Definition: Util.h:50
char mName[MaxNameSize]
Definition: NanoVDB.h:1577
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType type
Definition: NanoVDB.h:1741
GridType
List of types that are currently supported by NanoVDB.
Definition: NanoVDB.h:214
Vec3dBBox worldBBox
Definition: NanoVDB.h:5849
uint32_t mValueSize
Definition: NanoVDB.h:1573
typename BuildT::CoordType CoordType
Definition: NanoVDB.h:2138
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:4273
GridBlindMetaData(int64_t dataOffset, uint64_t valueCount, uint32_t valueSize, GridBlindDataSemantic semantic, GridBlindDataClass dataClass, GridType dataType)
Definition: NanoVDB.h:1592
__hostdev__ DenseIterator & operator++()
Definition: NanoVDB.h:1125
__hostdev__ void setOn()
Set all bits on.
Definition: NanoVDB.h:1292
__hostdev__ FloatType average() const
Return a const reference to the average of all the active values encoded in this leaf node...
Definition: NanoVDB.h:4383
Class to access points at a specific voxel location.
Definition: NanoVDB.h:5544
__hostdev__ Mask & operator^=(const Mask &other)
Bitwise XOR.
Definition: NanoVDB.h:1342
static __hostdev__ bool safeCast(const GridData *gridData)
return true if it is safe to cast the grid to a pointer of type GridMetaData, i.e. construction can be avoided.
Definition: NanoVDB.h:5497
ValueT mMaximum
Definition: NanoVDB.h:2634
static __hostdev__ uint64_t alignmentPadding(const void *p)
return the smallest number of bytes that when added to the specified pointer results in a 32 byte ali...
Definition: NanoVDB.h:582
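A small sketch of the 32 byte (NANOVDB_DATA_ALIGNMENT) helpers, assuming they are free functions in the nanovdb namespace as their NanoVDB.h definitions suggest:
    alignas(64) char buf[1024];                          // over-aligned scratch buffer
    uint64_t pad = nanovdb::alignmentPadding(buf + 1);   // bytes needed to reach the next 32 byte boundary
    char*    ptr = nanovdb::alignPtr(buf + 1);           // first 32 byte aligned address at or after buf + 1
    bool     ok  = nanovdb::isAligned(ptr);              // true by construction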
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:4913
__hostdev__ Iterator operator++(int)
Definition: NanoVDB.h:1102
ValueT mMinimum
Definition: NanoVDB.h:2633
__hostdev__ ChildIter operator++(int)
Definition: NanoVDB.h:2896
bool FloatType
Definition: NanoVDB.h:817
__hostdev__ const uint32_t & activeTileCount(uint32_t level) const
Return the total number of active tiles at the specified level of the tree.
Definition: NanoVDB.h:2497
C++11 implementation of std::is_floating_point.
Definition: Util.h:332
__hostdev__ FloatType getDev() const
Definition: NanoVDB.h:4197
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:3358
__hostdev__ const RootT & root() const
Definition: NanoVDB.h:4835
static void * memzero(void *dst, size_t byteCount)
Zero initialization of memory.
Definition: Util.h:297
uint64_t mValueCount
Definition: NanoVDB.h:1572
__hostdev__ DataType * data()
Definition: NanoVDB.h:3464
const typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:1763
__hostdev__ const BlindDataT * getBlindData() const
Get a const pointer to the blind data represented by this meta data.
Definition: NanoVDB.h:1653
__hostdev__ void setAvg(const StatsT &v)
Definition: NanoVDB.h:3256
__hostdev__ ValueT getValue(uint32_t n) const
Definition: NanoVDB.h:3224
static DstT * PtrAdd(void *p, int64_t offset)
Adds a byte offset to a non-const pointer to produce another non-const pointer.
Definition: Util.h:512
__hostdev__ void setValue(const CoordT &ijk, const ValueType &v)
Sets the value at the specified location and activates its state.
Definition: NanoVDB.h:4459
__hostdev__ ValueOnIter & operator++()
Definition: NanoVDB.h:2957
__hostdev__ float getAvg() const
return the quantized average of the active values in this node
Definition: NanoVDB.h:3770
Class that encapsulates two CRC32 checksums, one for the Grid, Tree and Root node meta data and one f...
Definition: NanoVDB.h:1817
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:5345
__hostdev__ uint64_t activeVoxelCount() const
Return the total number of active voxels in this tree.
Definition: NanoVDB.h:2490
__hostdev__ ValueType operator()(const CoordType &ijk) const
Definition: NanoVDB.h:4846
__hostdev__ const GridClass & gridClass() const
Definition: NanoVDB.h:5507
__hostdev__ float getMax() const
return the quantized maximum of the active values in this node
Definition: NanoVDB.h:3767
typename ChildT::ValueType ValueT
Definition: NanoVDB.h:3159
__hostdev__ bool getDev() const
Definition: NanoVDB.h:3986
Implements Tree::getDim(math::Coord)
Definition: NanoVDB.h:1780
Definition: NanoVDB.h:2857
__hostdev__ const ValueType & background() const
Return a const reference to the background value, i.e. the value associated with non-existing voxels.
Definition: NanoVDB.h:3033
__hostdev__ DenseIterator beginDense() const
Definition: NanoVDB.h:3455
Codec
Define compression codecs.
Definition: NanoVDB.h:5791
__hostdev__ Vec3T applyIJT(const Vec3T &xyz) const
Definition: NanoVDB.h:2013
__hostdev__ uint32_t countOn() const
Return the total number of set bits in this Mask.
Definition: NanoVDB.h:1062
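The Mask members listed here (setOn, countOn, isOn, toggle, etc.) can be exercised directly. A small sketch, assuming the class is reachable as nanovdb::Mask in this version of the header and using the 8x8x8 leaf configuration Mask<3>:

    #include <nanovdb/NanoVDB.h>
    #include <cstdio>

    int main()
    {
        nanovdb::Mask<3> mask; // 2^(3*3) = 512 bits, as used by an 8x8x8 leaf node
        mask.setOn();          // set all 512 bits
        std::printf("bits on: %u, all on: %s\n", mask.countOn(), mask.isOn() ? "yes" : "no");
        mask.toggle();         // flip every bit -> all off again
        std::printf("bits on after toggle: %u\n", mask.countOn());
        return 0;
    }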
uint8_t mFlags
Definition: NanoVDB.h:3737
__hostdev__ bool isChild(uint32_t n) const
Definition: NanoVDB.h:3236
Internal nodes of a VDB tree.
Definition: NanoVDB.h:3271
__hostdev__ ValueOnIterator()
Definition: NanoVDB.h:4249
__hostdev__ ValueType getMax() const
Definition: NanoVDB.h:4195
__hostdev__ ConstDenseIterator cbeginChildAll() const
Definition: NanoVDB.h:3012
static __hostdev__ T * alignPtr(T *p)
offset the specified pointer so it is 32 byte aligned. Works with both const and non-const pointers...
Definition: NanoVDB.h:590
__hostdev__ bool isOn() const
Return true if all the bits are set in this Mask.
Definition: NanoVDB.h:1223
__hostdev__ ConstTileIterator probe(const CoordT &ijk) const
Definition: NanoVDB.h:2769
ValueT ValueType
Definition: NanoVDB.h:5245
__hostdev__ ValueType getValue(int i, int j, int k) const
Definition: NanoVDB.h:4845
BuildT TreeType
Definition: NanoVDB.h:2129
Base-class for quantized float leaf nodes.
Definition: NanoVDB.h:3728
math::BBox< CoordT > mBBox
Definition: NanoVDB.h:3177
__hostdev__ const LeafNodeType * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:3069
__hostdev__ void setMin(const ValueT &v)
Definition: NanoVDB.h:2818
__hostdev__ Vec3T worldToIndexF(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:2220
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3979
const typename GridOrTreeOrRootT::RootNodeType::ChildNodeType::ChildNodeType Type
Definition: NanoVDB.h:1733
__hostdev__ uint64_t getMin() const
Definition: NanoVDB.h:4133
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:3447
__hostdev__ bool isCached(const CoordType &ijk) const
Definition: NanoVDB.h:4946
__hostdev__ Vec3T worldToIndex(const Vec3T &xyz) const
world to index space transformation
Definition: NanoVDB.h:2197
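worldToIndex and worldToIndexF (and their indexToWorld* counterparts) apply the grid's Map. A minimal sketch of nearest-voxel sampling at a world-space position, assuming a float grid that was built elsewhere (e.g. with createNanoGrid or loaded from a .nvdb file); Coord::Floor is used to select the voxel containing the mapped position.

    #include <nanovdb/NanoVDB.h>

    float sampleNearest(const nanovdb::FloatGrid& grid, const nanovdb::Vec3d& xyzWorld)
    {
        auto acc = grid.getAccessor();                           // light-weight, per-thread accessor
        const nanovdb::Vec3d ijk = grid.worldToIndex(xyzWorld);  // world -> index space
        const nanovdb::Coord c = nanovdb::Coord::Floor(ijk);     // voxel containing the position
        return acc.getValue(c);
    }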
Definition: NanoVDB.h:2369
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:4179
C++11 implementation of std::is_same.
Definition: Util.h:314
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:5263
const std::enable_if<!VecTraits< T >::IsVec, T >::type & max(const T &a, const T &b)
Definition: Composite.h:110
static __hostdev__ uint64_t memUsage()
Definition: NanoVDB.h:3744
__hostdev__ void setValue(uint32_t offset, bool v)
Definition: NanoVDB.h:3987
__hostdev__ bool isActive() const
Return true if this node or any of its child nodes contain active values.
Definition: NanoVDB.h:3566
__hostdev__ ValueType getValue(const CoordType &ijk) const
Return the value of the given voxel (regardless of state or location in the tree.) ...
Definition: NanoVDB.h:2468
__hostdev__ uint64_t getMin() const
Definition: NanoVDB.h:4113
Struct with all the member data of the InternalNode (useful during serialization of an openvdb InternalNode).
Definition: NanoVDB.h:3157
static __hostdev__ constexpr int64_t memUsage()
Definition: NanoVDB.h:3843
__hostdev__ const NanoGrid< Point > & grid() const
Definition: NanoVDB.h:5631
TileT * mPos
Definition: NanoVDB.h:2693
static __hostdev__ constexpr uint64_t memUsage()
Definition: NanoVDB.h:3874
const typename GridT::TreeType Type
Definition: NanoVDB.h:2413
Dummy type for a 4bit quantization of floating point values.
Definition: NanoVDB.h:184
__hostdev__ bool operator!=(const Checksum &rhs) const
return true if the checksums are not identical
Definition: NanoVDB.h:1887
uint32_t Type
Definition: NanoVDB.h:6184
__hostdev__ uint64_t gridSize() const
Return the memory footprint in bytes of the entire grid, i.e. including all tree nodes and blind data.
Definition: NanoVDB.h:2158
__hostdev__ Version version() const
Definition: NanoVDB.h:5537
typename ChildT::CoordType CoordT
Definition: NanoVDB.h:3162
uint64_t mCRC64
Definition: NanoVDB.h:1823
__hostdev__ uint64_t & full()
Definition: NanoVDB.h:1855
__hostdev__ void setMin(const ValueType &)
Definition: NanoVDB.h:4042
Return a pointer to the lower internal node where math::Coord maps to one of its values, i.e. terminates.
Definition: NanoVDB.h:6210
static __hostdev__ bool hasStats()
Definition: NanoVDB.h:3981
const typename GridT::TreeType type
Definition: NanoVDB.h:2414
__hostdev__ NodeTrait< RootT, 1 >::type * getFirstLower()
Definition: NanoVDB.h:2562
__hostdev__ ValueType operator()(int i, int j, int k) const
Definition: NanoVDB.h:4959
__hostdev__ FloatType variance() const
Return the variance of all the active values encoded in this internal node and any of its child nodes...
Definition: NanoVDB.h:3495
__hostdev__ void setBlindData(const void *blindData)
Definition: NanoVDB.h:1630
__hostdev__ const ValueT & getMin() const
Definition: NanoVDB.h:3241
__hostdev__ uint64_t voxelPoints(const Coord &ijk, const AttT *&begin, const AttT *&end) const
get iterators over attributes to points at a specific voxel location
Definition: NanoVDB.h:5656
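voxelPoints belongs to the point accessor referenced above ("Class to access points at a specific voxel location"). A sketch of iterating over the points stored in one voxel of a point grid; the Vec3f attribute type and the assumption that the grid encodes point positions as blind data are about how the grid was authored, not something NanoVDB prescribes.

    #include <nanovdb/NanoVDB.h>
    #include <cstdint>
    #include <cstdio>

    void printPointsInVoxel(const nanovdb::NanoGrid<nanovdb::Point>& grid, const nanovdb::Coord& ijk)
    {
        nanovdb::PointAccessor<nanovdb::Vec3f, nanovdb::Point> acc(grid);
        const nanovdb::Vec3f *begin = nullptr, *end = nullptr;
        const uint64_t count = acc.voxelPoints(ijk, begin, end); // iterators over this voxel's points
        std::printf("voxel (%d,%d,%d) holds %llu points\n",
                    ijk[0], ijk[1], ijk[2], (unsigned long long)count);
        for (const nanovdb::Vec3f* p = begin; p != end; ++p) {
            std::printf("  point at (%f, %f, %f)\n", (*p)[0], (*p)[1], (*p)[2]);
        }
    }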
uint8_t mFlags
Definition: NanoVDB.h:3974
T type
Definition: Util.h:408
__hostdev__ uint64_t getAvg() const
Definition: NanoVDB.h:4135
__hostdev__ void setBBoxOn(bool on=true)
Definition: NanoVDB.h:1993
__hostdev__ bool isUnknown() const
Definition: NanoVDB.h:2264
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:2641
__hostdev__ uint32_t head() const
Definition: NanoVDB.h:1856
T type
Definition: NanoVDB.h:507
__hostdev__ ValueIterator & operator++()
Definition: NanoVDB.h:4343
__hostdev__ bool setName(const char *name)
Sets the name string.
Definition: NanoVDB.h:1638
__hostdev__ uint32_t blindDataCount() const
Return the number of blind-data blocks encoded in this grid.
Definition: NanoVDB.h:2298
__hostdev__ void setChild(uint32_t n, const void *ptr)
Definition: NanoVDB.h:3199
__hostdev__ Vec3T applyInverseJacobian(const Vec3T &xyz) const
Definition: NanoVDB.h:2011
__hostdev__ bool operator==(const Version &rhs) const
Definition: NanoVDB.h:730
Struct with all the member data of the Grid (useful during serialization of an openvdb grid) ...
Definition: NanoVDB.h:1921
typename ChildT::template MaskType< LOG2 > MaskType
Definition: NanoVDB.h:3283
auto callNanoGrid(GridDataT *gridData, ArgsT &&...args)
Below is an example of the struct used for generic programming with callNanoGrid. ...
Definition: NanoVDB.h:4725
Implements Tree::isActive(math::Coord)
Definition: NanoVDB.h:1778
__hostdev__ Vec3T applyInverseMapF(const Vec3T &xyz) const
Apply the inverse affine mapping to a vector using 32bit floating point arithmetic.
Definition: NanoVDB.h:1495
Definition: NanoVDB.h:758
__hostdev__ bool probeValue(const CoordT &ijk, ValueType &v) const
Return true if the voxel value at the given coordinate is active, and update v with that value.
Definition: NanoVDB.h:4482
__hostdev__ NodeTrait< RootT, 2 >::type * getFirstUpper()
Definition: NanoVDB.h:2564
__hostdev__ void toggle()
Toggle the state of all bits in the mask.
Definition: NanoVDB.h:1310
__hostdev__ bool isPointIndex() const
Definition: NanoVDB.h:2260
__hostdev__ void setMax(const ValueType &)
Definition: NanoVDB.h:4200
uint32_t mData0
Definition: NanoVDB.h:1939
__hostdev__ ValueType operator*() const
Definition: NanoVDB.h:4260
Dummy type for indexing points into voxels.
Definition: NanoVDB.h:196
__hostdev__ const MaskType< LOG2DIM > & getValueMask() const
Definition: NanoVDB.h:3476
__hostdev__ const void * blindData() const
returns a const void pointer to the blind data
Definition: NanoVDB.h:1642
__hostdev__ ValueType getValue(const CoordT &ijk) const
Return the voxel value at the given coordinate.
Definition: NanoVDB.h:4449
static __hostdev__ size_t memUsage()
Return memory usage in bytes for the class.
Definition: NanoVDB.h:3472
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:3316
typename ChildT::FloatType StatsT
Definition: NanoVDB.h:3161
Definition: NanoVDB.h:916
__hostdev__ bool isActive(const CoordType &ijk) const
Return the active state of the given voxel (regardless of its location in the tree).
Definition: NanoVDB.h:2472
__hostdev__ const ChildT * getChild(uint32_t n) const
Definition: NanoVDB.h:3218
uint32_t findHighestOn(uint32_t v)
Returns the index of the highest, i.e. most significant, on bit in the specified 32 bit word...
Definition: Util.h:606
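findHighestOn is one of the bit utilities from Util.h used by the masks. A trivial sketch, assuming it is reachable as nanovdb::util::findHighestOn (consistent with the Util.h definition referenced above) and that including NanoVDB.h pulls Util.h in:

    #include <nanovdb/NanoVDB.h>
    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Index of the most significant set bit: 0x90 = 0b1001'0000 -> bit 7.
        const uint32_t v = 0x90u;
        std::printf("highest on bit of 0x%x is %u\n", v, nanovdb::util::findHighestOn(v));
        return 0;
    }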
__hostdev__ uint64_t activeVoxelCount() const
Definition: NanoVDB.h:5531
bool Type
Definition: NanoVDB.h:527
__hostdev__ const ChildT * probeChild(const CoordT &ijk) const
Definition: NanoVDB.h:2794
Definition: NanoVDB.h:1114
__hostdev__ ValueType getFirstValue() const
If the first entry in this node's table is a tile, return the tile's value. Otherwise, return the result of calling getFirstValue() on the child.
Definition: NanoVDB.h:3505
StatsT mAverage
Definition: NanoVDB.h:3184
__hostdev__ float getValue(uint32_t i) const
Definition: NanoVDB.h:3882
Definition: NanoVDB.h:2978
__hostdev__ const Map & map() const
Definition: NanoVDB.h:5526
__hostdev__ ValueIterator cbeginValueAll() const
Definition: NanoVDB.h:3377
CoordT mBBoxMin
Definition: NanoVDB.h:3735
__hostdev__ NodeT & operator*() const
Definition: NanoVDB.h:2888
typename ChildT::CoordType CoordT
Definition: NanoVDB.h:2601
__hostdev__ uint64_t getMax() const
Definition: NanoVDB.h:4114
__hostdev__ ValueType getValue(uint32_t offset) const
Return the voxel value at the given offset.
Definition: NanoVDB.h:4446
__hostdev__ ValueIter & operator++()
Definition: NanoVDB.h:2924
typename GridTree< GridT >::type GridTreeT
Definition: NanoVDB.h:2418
MaskT mValueMask
Definition: NanoVDB.h:3179
NANOVDB_HOSTDEV_DISABLE_WARNING __hostdev__ uint32_t findNext(uint32_t start) const
Definition: NanoVDB.h:1362
__hostdev__ CoordType getOrigin() const
Definition: NanoVDB.h:2870
__hostdev__ uint32_t totalNodeCount() const
Definition: NanoVDB.h:2516
uint16_t mMin
Definition: NanoVDB.h:3742
typename ChildT::FloatType StatsT
Definition: NanoVDB.h:2602
typename GridOrTreeOrRootT::RootNodeType::ChildNodeType Type
Definition: NanoVDB.h:1740
__hostdev__ Vec3d voxelSize() const
Definition: NanoVDB.h:5529
typename FloatTraits< ValueType >::FloatType FloatType
Definition: NanoVDB.h:4158
__hostdev__ const ValueT & getMin() const
Definition: NanoVDB.h:2813
GridMetaData(const GridData *gridData)
Definition: NanoVDB.h:5477
const typename GridOrTreeOrRootT::LeafNodeType Type
Definition: NanoVDB.h:1718
__hostdev__ DataType * data()
Definition: NanoVDB.h:2150
MaskT< LOG2DIM > mValues
Definition: NanoVDB.h:3976
This is a convenient class that allows for access to grid meta-data that is independent of the value type of a grid.
Definition: NanoVDB.h:5461
__hostdev__ TileIterator beginTile()
Definition: NanoVDB.h:2758
__hostdev__ int findBlindData(const char *name) const
Return the index of the first blind data with the specified name if found, otherwise -1.
Definition: NanoVDB.h:2349
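findBlindData and the GridBlindMetaData entries above work together when a grid carries auxiliary channels. A sketch that looks up a blind-data block by name and reads it as floats; the channel name "radius" and its float element type are assumptions about how the grid was authored.

    #include <nanovdb/NanoVDB.h>
    #include <cstdio>

    void printFirstRadius(const nanovdb::FloatGrid& grid)
    {
        const int idx = grid.findBlindData("radius"); // -1 if no such block exists
        if (idx < 0) return;
        const nanovdb::GridBlindMetaData& meta = grid.blindMetaData(static_cast<uint32_t>(idx));
        const float* radii = meta.getBlindData<float>(); // typed view of the blind data
        if (radii && meta.mValueCount > 0) std::printf("first radius: %f\n", radii[0]);
    }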
__hostdev__ uint32_t gridCount() const
Definition: NanoVDB.h:5524
void writeUncompressedGrid(StreamT &os, const GridData *gridData, const ValueT *blindData, GridBlindDataSemantic semantic=GridBlindDataSemantic::Unknown, bool raw=false)
Write an IndexGrid to a stream and append blind data.
Definition: NanoVDB.h:5930
uint32_t mTableSize
Definition: NanoVDB.h:2630
typename BuildT::BuildType BuildType
Definition: NanoVDB.h:2137
typename T::ValueType ElementType
Definition: NanoVDB.h:778
__hostdev__ bool isMask() const
Definition: NanoVDB.h:5514
__hostdev__ uint64_t memUsage() const
return memory usage in bytes for the leaf node
Definition: NanoVDB.h:4437
__hostdev__ bool isSequential() const
return true if the specified node type is laid out breadth-first in memory and has a fixed size...
Definition: NanoVDB.h:2275
Definition: NanoVDB.h:4224
typename RootT::CoordType CoordType
Definition: NanoVDB.h:2440
float type
Definition: NanoVDB.h:563
defines a tree type from a grid type while preserving constness
Definition: NanoVDB.h:2405
__hostdev__ bool probeValue(const CoordType &ijk, ValueType &v) const
Definition: NanoVDB.h:5135
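probeValue returns both the value and the active state of a voxel in one call. A small sketch via a ReadAccessor, assuming a float grid built elsewhere:

    #include <nanovdb/NanoVDB.h>
    #include <cstdio>

    void probeVoxel(const nanovdb::FloatGrid& grid, const nanovdb::Coord& ijk)
    {
        auto acc = grid.getAccessor();
        float value = 0.0f;
        const bool active = acc.probeValue(ijk, value); // value is updated either way
        std::printf("voxel (%d,%d,%d): value=%f, active=%s\n",
                    ijk[0], ijk[1], ijk[2], value, active ? "true" : "false");
    }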
__hostdev__ GridType mapToGridType()
Definition: NanoVDB.h:886
__hostdev__ uint32_t nodeCount(int level) const
Definition: NanoVDB.h:2510
__hostdev__ ChannelT & operator()(int i, int j, int k) const
Definition: NanoVDB.h:5745
__hostdev__ AccessorType getAccessor() const
Return a new instance of a ReadAccessor used to access values in this grid.
Definition: NanoVDB.h:2187
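getAccessor is the recommended entry point for random access: a ReadAccessor caches the nodes visited by previous lookups, so each thread should own its own accessor. A small sketch, assuming a float grid built elsewhere; Tree::getValue also works but re-traverses the tree from the root on every call.

    #include <nanovdb/NanoVDB.h>

    float sumThreeVoxels(const nanovdb::FloatGrid& grid)
    {
        auto acc = grid.getAccessor(); // light-weight; create one per thread, never share
        float sum = 0.0f;
        sum += acc.getValue(nanovdb::Coord(0, 0, 0));
        sum += acc.getValue(nanovdb::Coord(1, 0, 0)); // neighboring lookups hit the cached leaf
        sum += acc.getValue(nanovdb::Coord(2, 0, 0));
        return sum;
    }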
Visits child nodes of this node only.
Definition: NanoVDB.h:3297
__hostdev__ Coord offsetToGlobalCoord(uint32_t n) const
Definition: NanoVDB.h:3558
typename remove_const< T >::type type
Definition: Util.h:461
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:4030
__hostdev__ void setValue(uint32_t offset, uint16_t value)
Definition: NanoVDB.h:4187
__hostdev__ Checksum()
default constructor initializes the checksum to EMPTY
Definition: NanoVDB.h:1831
static __hostdev__ constexpr uint32_t padding()
Definition: NanoVDB.h:3844
__hostdev__ ValueIterator(const InternalNode *parent)
Definition: NanoVDB.h:3352
typename Mask< 3 >::template Iterator< ON > MaskIterT
Definition: NanoVDB.h:4240
GridType mDataType
Definition: NanoVDB.h:1576
Leaf nodes of the VDB tree. (defaults to 8x8x8 = 512 voxels)
Definition: NanoVDB.h:4221
__hostdev__ bool isActive(const CoordType &ijk) const
Definition: NanoVDB.h:4961
__hostdev__ DataType * data()
Definition: NanoVDB.h:3022
__hostdev__ const uint64_t & valueCount() const
Return total number of values indexed by the IndexGrid.
Definition: NanoVDB.h:5723
__hostdev__ NodeTrait< RootT, LEVEL >::type * getFirstNode()
return a pointer to the first node at the specified level
Definition: NanoVDB.h:2545
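getFirstNode (and getFirstLeaf/getFirstLower/getFirstUpper listed elsewhere on this page) expose the fact that each node level is laid out breadth-first in a contiguous array of fixed-size nodes, as noted under isSequential above. A sketch that visits every leaf node with plain pointer arithmetic, assuming a float grid built elsewhere:

    #include <nanovdb/NanoVDB.h>
    #include <cstdint>
    #include <cstdio>

    void countActiveVoxelsPerLeaf(const nanovdb::FloatGrid& grid)
    {
        const auto& tree = grid.tree();
        const auto* leaf = tree.getFirstLeaf(); // first leaf node, or nullptr if the tree has none
        if (!leaf) return;
        for (uint32_t i = 0, n = tree.nodeCount(0); i < n; ++i) {
            const auto& node = leaf[i]; // leaves are contiguous and of fixed size
            std::printf("leaf %u at (%d,%d,%d): %u active voxels\n", i,
                        node.origin()[0], node.origin()[1], node.origin()[2],
                        node.valueMask().countOn());
        }
    }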
typename util::match_const< Tile, DataT >::type TileT
Definition: NanoVDB.h:2691
__hostdev__ bool isValue() const
Definition: NanoVDB.h:2664
__hostdev__ Vec3T worldToIndexDir(const Vec3T &dir) const
transformation from world space direction to index space direction
Definition: NanoVDB.h:2211
__hostdev__ DenseIterator cbeginChildAll() const
Definition: NanoVDB.h:3456
BuildT BuildType
Definition: NanoVDB.h:5036
__hostdev__ uint32_t rootTableSize() const
return the size of the root node's tile table
Definition: NanoVDB.h:2099
bool FloatType
Definition: NanoVDB.h:3968
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:4479
double mTaperD
Definition: NanoVDB.h:1404
__hostdev__ CoordType getCoord() const
Definition: NanoVDB.h:3452
uint32_t dim
Definition: NanoVDB.h:6287
__hostdev__ const ValueType & maximum() const
Return a const reference to the maximum active value encoded in this root node and any of its child nodes.
Definition: NanoVDB.h:3043
MaskT mChildMask
Definition: NanoVDB.h:3180
__hostdev__ bool isActive(uint32_t n) const
Definition: NanoVDB.h:4469
__hostdev__ Version()
Default constructor.
Definition: NanoVDB.h:714
__hostdev__ void setMinMaxOn(bool on=true)
Definition: NanoVDB.h:1992
static __hostdev__ uint32_t valueCount()
Definition: NanoVDB.h:4109
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:3660
__hostdev__ const Tile * tile(uint32_t n) const
Returns a pointer to the tile at the specified linear offset.
Definition: NanoVDB.h:2676
__hostdev__ const StatsT & average() const
Definition: NanoVDB.h:3243
__hostdev__ ValueType getFirstValue() const
Return the first value in this leaf node.
Definition: NanoVDB.h:4452
__hostdev__ ValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:3411
typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:1754
typename NanoLeaf< BuildT >::ValueType ValueT
Definition: NanoVDB.h:6156
__hostdev__ ValueOnIterator(const InternalNode *parent)
Definition: NanoVDB.h:3391
__hostdev__ ConstValueOnIterator cbeginValueOn() const
Definition: NanoVDB.h:2975
Definition: NanoVDB.h:2646
__hostdev__ ReadAccessor(const GridT &grid)
Constructor from a grid.
Definition: NanoVDB.h:4921
typename BuildToValueMap< BuildT >::Type ValueT
Definition: NanoVDB.h:6250
FloatType mAverage
Definition: NanoVDB.h:3664
__hostdev__ TileIter(DataT *data, uint32_t pos=0)
Definition: NanoVDB.h:2697
BuildT BuildType
Definition: NanoVDB.h:4807
ValueT ValueType
Definition: NanoVDB.h:5037
__hostdev__ const ChildNodeType * probeChild(const CoordType &ijk) const
Definition: NanoVDB.h:3529
float Type
Definition: NanoVDB.h:534
typename UpperNodeType::ChildNodeType LowerNodeType
Definition: NanoVDB.h:2842
StatsT mStdDevi
Definition: NanoVDB.h:2636
__hostdev__ void setOrigin(const T &ijk)
Definition: NanoVDB.h:4089
__hostdev__ const DataType * data() const
Definition: NanoVDB.h:2152
__hostdev__ uint32_t & head()
Definition: NanoVDB.h:1857
ValueT value
Definition: NanoVDB.h:3168
static __hostdev__ constexpr uint32_t padding()
Return padding of this class in bytes, due to aliasing and 32B alignment.
Definition: NanoVDB.h:4175
__hostdev__ bool hasMinMax() const
Definition: NanoVDB.h:5516
CoordBBox indexBBox
Definition: NanoVDB.h:5850
__hostdev__ uint32_t rootTableSize() const
Definition: NanoVDB.h:5535
__hostdev__ TileIter & operator++()
Definition: NanoVDB.h:2708
__hostdev__ bool isCached1(const CoordType &ijk) const
Definition: NanoVDB.h:5112
__hostdev__ bool isActive(uint32_t n) const
Definition: NanoVDB.h:3230
__hostdev__ ValueOnIterator beginValueOn()
Definition: NanoVDB.h:2974
__hostdev__ const ChildT * getChild(const Tile *tile) const
Definition: NanoVDB.h:2807
__hostdev__ bool isEmpty() const
return true if the 64 bit checksum is disabled (unset)
Definition: NanoVDB.h:1870
__hostdev__ Iterator(uint32_t pos, const Mask *parent)
Definition: NanoVDB.h:1088
__hostdev__ ReadAccessor(const RootT &root)
Constructor from a root node.
Definition: NanoVDB.h:5043
__hostdev__ ValueType getMax() const
Definition: NanoVDB.h:3689
typename GridOrTreeOrRootT::RootNodeType type
Definition: NanoVDB.h:1755
__hostdev__ void * nodePtr()
Return a non-const void pointer to the first node at LEVEL.
Definition: NanoVDB.h:2047
__hostdev__ float getMin() const
return the quantized minimum of the active values in this node
Definition: NanoVDB.h:3764
__hostdev__ const LeafT * probeLeaf(const CoordType &ijk) const
Definition: NanoVDB.h:4963
__hostdev__ ValueOffIterator()
Definition: NanoVDB.h:4282
ChildT ChildNodeType
Definition: NanoVDB.h:3279
typename DataType::BuildT BuildType
Definition: NanoVDB.h:3277
__hostdev__ ValueOffIterator cbeginValueOff() const
Definition: NanoVDB.h:4306
typename DataType::ValueType ValueType
Definition: NanoVDB.h:4232
float type
Definition: NanoVDB.h:556
__hostdev__ uint32_t getMajor() const
Definition: NanoVDB.h:736
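A small sketch of the Version class referenced here, assuming (as the "Default constructor" entry above suggests) that a default-constructed Version encodes the version macros of the headers being compiled:

    #include <nanovdb/NanoVDB.h>
    #include <cstdio>

    int main()
    {
        nanovdb::Version v; // major/minor/patch packed into a single 32-bit word
        std::printf("NanoVDB file format version: %u.%u.%u\n",
                    v.getMajor(), v.getMinor(), v.getPatch());
        return 0;
    }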
__hostdev__ Vec3T indexToWorldGradF(const Vec3T &grad) const
Transforms the gradient from index space to world space.
Definition: NanoVDB.h:2239
__hostdev__ const NodeTrait< TreeT, LEVEL >::type * getNode() const
Definition: NanoVDB.h:5293
__hostdev__ bool hasBBox() const
Definition: NanoVDB.h:2266
uint64_t type
Definition: NanoVDB.h:570
__hostdev__ FloatType getAvg() const
Definition: NanoVDB.h:4196
typename ChildT::LeafNodeType LeafNodeType
Definition: NanoVDB.h:3278
typename NanoNode< BuildT, LEVEL >::type NanoNodeT
Definition: NanoVDB.h:4654
__hostdev__ void setValue(const CoordType &k, bool s, const ValueType &v)
Definition: NanoVDB.h:2656
__hostdev__ auto getNodeInfo(const CoordType &ijk) const
Definition: NanoVDB.h:5342
__hostdev__ const uint64_t * words() const
Definition: NanoVDB.h:1172
__hostdev__ const GridBlindMetaData & blindMetaData(uint32_t n) const
Definition: NanoVDB.h:2332
static __hostdev__ uint32_t bitCount()
Return the number of bits available in this Mask.
Definition: NanoVDB.h:1056
__hostdev__ void setDev(const bool &)
Definition: NanoVDB.h:3996
__hostdev__ Vec3T indexToWorldDir(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:2206
__hostdev__ const void * getRoot() const
Get a const void pointer to the root node (never NULL)
Definition: NanoVDB.h:2386
__hostdev__ const StatsT & stdDeviation() const
Definition: NanoVDB.h:3244
__hostdev__ bool isActive() const
Return true if any of the voxel values are active in this leaf node.
Definition: NanoVDB.h:4472
GridBlindDataClass
Blind-data Classes that are currently supported by NanoVDB.
Definition: NanoVDB.h:403
MaskT< LOG2DIM > mValueMask
Definition: NanoVDB.h:4165
__hostdev__ const void * treePtr() const
Definition: NanoVDB.h:2030
static __hostdev__ size_t memUsage()
Return the memory footprint in bytes of this Mask.
Definition: NanoVDB.h:1053
const typename GridOrTreeOrRootT::RootNodeType Type
Definition: NanoVDB.h:1762
Visits all active values in a leaf node.
Definition: NanoVDB.h:4243
__hostdev__ const LeafNodeType * getFirstLeaf() const
Definition: NanoVDB.h:2561
__hostdev__ Vec3T indexToWorldDirF(const Vec3T &dir) const
transformation from index space direction to world space direction
Definition: NanoVDB.h:2229