Sparse Grids
- class fvdb.Grid(*, impl: GridBatch)[source]
A single sparse voxel grid with support for efficient operations.
A Grid represents a single sparse 3D voxel grid that can be processed efficiently on a GPU. The class provides methods for common operations like sampling, convolution, pooling, dilation, union, etc. It also provides more advanced features such as marching cubes, TSDF fusion, and fast ray marching.
A Grid does not store data itself, but rather the structure (or topology) of the sparse voxel grid. Voxel data (e.g., features, colors, densities) are stored separately as torch.Tensor objects associated with the grid. This separation allows flexibility in the type and number of channels of data a grid can index into. It also allows multiple grids to share the same data storage if desired.
When using a Grid's voxel coordinates, there are three important coordinate systems to be aware of:
World Space: The continuous 3D coordinate system in which the grid exists.
Voxel Space: The discrete voxel index system, where each voxel is identified by its integer indices (i, j, k).
Index Space: The linear indexing of active voxels in the grid’s internal storage.
At its core, a Grid uses a very fast mapping from voxel space into index space to perform operations on a torch.Tensor of data associated with the grid. This mapping allows for efficient access and manipulation of voxel data. For example:
voxel_coords = torch.tensor([[8, 7, 6], [1, 2, 3], [4, 5, 6]], device="cuda")  # Voxel space coordinates

# Create a Grid containing the voxels (8, 7, 6), (1, 2, 3), and (4, 5, 6) such that the voxels
# have a world space size of 1x1x1, and where the [0, 0, 0] voxel in voxel space is at world space origin (0, 0, 0).
grid = Grid.from_ijk(voxel_coords, voxel_size=1.0, origin=0.0, device="cuda")

# Create some data associated with the grid - here we have 3 voxels and 2 channels per voxel
voxel_data = torch.randn(grid.num_voxels, 2, device="cuda")  # Index space data

# Map voxel space coordinates to index space
indices = grid.ijk_to_index(voxel_coords)  # Shape: (3,)

# Access the data for the specified voxel coordinates
selected_data = voxel_data[indices]  # Shape: (3, 2)
Note
The grid is stored in a sparse format using NanoVDB where only active (non-empty) voxels are allocated, making it extremely memory efficient for representing large volumes with sparse occupancy.
Note
A Grid cannot be a nonexistent (grid_count == 0) grid; for that you'd need a GridBatch with batch_size=0. However, a Grid can have zero voxels.
Note
The Grid constructor is for internal use only. To create a Grid with actual content, use the classmethods:
from_dense() for dense data
from_dense_axis_aligned_bounds() for a dense grid defined by axis-aligned bounds
from_grid_batch() for a single grid from a grid batch
from_ijk() for voxel coordinates
from_mesh() for triangle meshes
from_nearest_voxels_to_points() for nearest voxel mapping
from_points() for point clouds
from_zero_voxels() for a single grid with zero voxels
- property address: int
The address of the underlying C++ NanoVDB grid object.
- Returns:
address (int) – The memory address of the underlying C++ NanoVDB grid.
- avg_pool(pool_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, data: Tensor, stride: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, coarse_grid: Grid | None = None) tuple[Tensor, Grid][source]
Apply average pooling to the given data associated with this Grid, returned as data associated with the given coarse_grid or a newly created coarse Grid.
Performs average pooling on the voxel data, reducing the resolution by the specified pool_factor. Each output voxel contains the average of the corresponding input voxels within the pooling window. The pooling operation respects the sparse structure of this Grid and the given coarse_grid.
Note
If you pass coarse_grid = None, the returned coarse grid will have its voxel size multiplied by the pool_factor and its origin adjusted accordingly.
Note
This method supports backpropagation through the pooling operation.
- Parameters:
pool_factor (NumericMaxRank1) – The factor by which to downsample the grid, broadcastable to shape (3,), integer dtype.
data (torch.Tensor) – The voxel data to pool. A torch.Tensor with shape (total_voxels, channels).
stride (NumericMaxRank1) – The stride to use when pooling, broadcastable to shape (3,), integer dtype. Default is 0.
coarse_grid (Grid, optional) – Pre-allocated coarse grid to use for output. If None, a new Grid is created.
- Returns:
pooled_data (torch.Tensor) – A tensor containing the pooled voxel data with shape (coarse_total_voxels, channels).
coarse_grid (Grid) – A Grid object representing the coarse grid topology after pooling. Matches the provided coarse_grid if given.
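A minimal sketch of the call pattern (the grid construction, channel count, and pool factor below are hypothetical):

grid = Grid.from_dense(dense_dims=[8, 8, 8], voxel_size=1.0, origin=0.0, device="cuda")
data = torch.randn(grid.num_voxels, 4, device="cuda")
# Downsample by 2 along each axis; a new coarse Grid is created since coarse_grid is None
pooled_data, coarse = grid.avg_pool(2, data)
assert pooled_data.shape[0] == coarse.num_voxels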
- property bbox: Tensor
The voxel-space bounding box of this Grid.
Note
The bounding box is inclusive of the minimum voxel and the maximum voxel.
e.g. if you have a grid with a single voxel at index (0, 0, 0), the bounding box will be [[0, 0, 0], [0, 0, 0]].
e.g. if you have a grid with voxels at indices (0, 0, 0) and (1, 1, 1), the bounding box will be [[0, 0, 0], [1, 1, 1]].
- Returns:
bbox (torch.Tensor) – A
(2, 3)-shaped tensor representing the minimum and maximum voxel indices of the bounding box. If the grid has zero voxels, returns a zero tensor.
- clip(features: Tensor, ijk_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, ijk_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size) tuple[Tensor, Grid][source]
Creates a new Grid containing only the voxels that fall within the specified bounding box range [ijk_min, ijk_max], and returns the corresponding clipped features.
Note
This method supports backpropagation through the clipping operation.
- Parameters:
features (torch.Tensor) – The voxel features to clip. A torch.Tensor with shape (total_voxels, channels).
ijk_min (NumericMaxRank1) – Minimum bounds in index space, broadcastable to shape (3,), integer dtype.
ijk_max (NumericMaxRank1) – Maximum bounds in index space, broadcastable to shape (3,), integer dtype.
- Returns:
clipped_features (torch.Tensor) – A tensor containing the clipped voxel features with shape (clipped_total_voxels, channels).
clipped_grid (Grid) – A new Grid object containing only the voxels within the specified bounds.
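A minimal sketch of the call pattern, assuming grid is an existing Grid (the channel count and bounds are hypothetical):

features = torch.randn(grid.num_voxels, 3, device=grid.device)
# Keep only voxels whose ijk coordinates lie in [0, 3] along every axis
clipped_features, clipped = grid.clip(features, ijk_min=[0, 0, 0], ijk_max=[3, 3, 3])
assert clipped_features.shape[0] == clipped.num_voxels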
- clipped_grid(ijk_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, ijk_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size) Grid[source]
Return a new Grid representing the clipped version of this grid. Each voxel [i, j, k] in the input grid is included in the output if it lies within ijk_min and ijk_max.
- Parameters:
ijk_min (NumericMaxRank1) – Index space minimum bound of the clip region, broadcastable to shape (3,), integer dtype.
ijk_max (NumericMaxRank1) – Index space maximum bound of the clip region, broadcastable to shape (3,), integer dtype.
- Returns:
clipped_grid (Grid) – A Grid representing the clipped version of this grid.
- coarsened_grid(coarsening_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size) Grid[source]
Return a Grid representing the coarsened version of this grid.
- Parameters:
coarsening_factor (NumericMaxRank1) – The factor by which to coarsen the grid, broadcastable to shape (3,), integer dtype.
- Returns:
coarsened_grid (Grid) – A Grid representing the coarsened version of this grid.
- contiguous() Grid[source]
Return a contiguous copy of the grid.
Note
This is a no-op since a single Grid is always contiguous. However, this method is provided for API consistency with GridBatch.
- Returns:
grid (Grid) – The same Grid object.
- conv_grid(kernel_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, stride: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1) Grid[source]
Return a Grid representing the active voxels at the output of a convolution applied to this Grid with a given kernel.
- Parameters:
kernel_size (NumericMaxRank1) – The size of the kernel to convolve with, broadcastable to shape (3,), integer dtype.
stride (NumericMaxRank1) – The stride to use when convolving, broadcastable to shape (3,), integer dtype.
- Returns:
conv_grid (Grid) – A Grid representing the set of voxels in the output of the convolution defined by kernel_size and stride.
- coords_in_grid(ijk: Tensor) Tensor[source]
Check if voxel coordinates are in active voxels.
- Parameters:
ijk (torch.Tensor) – Voxel coordinates to check. A torch.Tensor with shape (num_queries, 3) and integer coordinates.
- Returns:
mask (torch.Tensor) – A Boolean mask indicating which coordinates correspond to active voxels. Shape:
(num_queries,).
- cubes_in_grid(cube_centers: Tensor, cube_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0.0, cube_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0.0) Tensor[source]
Tests whether cubes defined by their centers and bounds are completely inside the active voxels of this Grid.
- Parameters:
cube_centers (torch.Tensor) – Centers of the cubes in world coordinates. A torch.Tensor with shape (num_cubes, 3).
cube_min (NumericMaxRank1) – Minimum offsets from center defining cube bounds, broadcastable to shape (3,), floating dtype.
cube_max (NumericMaxRank1) – Maximum offsets from center defining cube bounds, broadcastable to shape (3,), floating dtype.
- Returns:
mask (torch.Tensor) – A Boolean mask indicating which cubes are fully contained in the grid. Shape:
(num_cubes,).
- cubes_intersect_grid(cube_centers: Tensor, cube_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0.0, cube_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0.0) Tensor[source]
Tests whether cubes defined by their centers and bounds have any intersection with the active voxels of this Grid.
- Parameters:
cube_centers (torch.Tensor) – Centers of the cubes in world coordinates. A torch.Tensor with shape (num_cubes, 3).
cube_min (NumericMaxRank1) – Minimum offsets from center defining cube bounds, broadcastable to shape (3,), floating dtype.
cube_max (NumericMaxRank1) – Maximum offsets from center defining cube bounds, broadcastable to shape (3,), floating dtype.
- Returns:
mask (torch.Tensor) – A Boolean mask indicating which cubes intersect the grid. Shape:
(num_cubes,).
- cuda() Grid[source]
Return a copy of this Grid on a CUDA device, or this Grid if it is already on CUDA.
- property device: device
Return the torch.device where this Grid is stored.
- Returns:
device (torch.device) – The device of the grid.
- dilated_grid(dilation: int) Grid[source]
Return a new Grid that is the result of dilating the current Grid by a given number of voxels.
- Parameters:
dilation (int) – The dilation radius in voxels.
- Returns:
grid (Grid) – A new Grid with dilated active regions.
- property dual_bbox: Tensor
Return the voxel-space bounding box of the dual of this Grid, i.e. the bounding box of the Grid whose voxel centers correspond to voxel corners in this Grid.
See also
bbox() for the bounding box of this Grid, and dual_grid() for computing the dual grid itself.
Note
The bounding box is inclusive of the minimum voxel and the maximum voxel.
e.g. if you have a grid with a single voxel at index (0, 0, 0), the dual grid will contain voxels at indices (0, 0, 0), (0, 0, 1), (0, 1, 0), ..., (1, 1, 1), and the bounding box will be [[0, 0, 0], [1, 1, 1]].
- Returns:
dual_bbox (torch.Tensor) – A
(2, 3)-shaped tensor representing the minimum and maximum voxel indices of the dual bounding box. If the grid has zero voxels, returns a zero tensor.
- dual_grid(exclude_border: bool = False) Grid[source]
Return a new Grid whose voxel centers correspond to the corners of this Grid.
The dual grid is useful for staggered grid discretizations and finite difference operations.
- Parameters:
exclude_border (bool) – If True, excludes border voxels that would extend beyond the primal grid bounds. Default is False.
- Returns:
grid (Grid) – A new Grid representing the dual grid.
- classmethod from_dense(dense_dims: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, ijk_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, voxel_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1, origin: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, mask: Tensor | None = None, device: str | device | None = None) Grid[source]
A dense grid has a voxel for every coordinate in an axis-aligned box.
The dense grid is defined by:
dense_dims: the size of the dense grid (shape [3,] = [W, H, D])
ijk_min: the minimum voxel index for the grid (shape [3,] = [i_min, j_min, k_min])
voxel_size: the world-space size of each voxel (shape [3,] = [sx, sy, sz])
origin: the world-space coordinate of the center of the [0, 0, 0] voxel of the grid (shape [3,] = [x0, y0, z0])
mask: indicates which voxels are "active" in the resulting grid.
- Parameters:
dense_dims (NumericMaxRank1) – Dimensions of the dense grid, broadcastable to shape (3,), integer dtype.
ijk_min (NumericMaxRank1) – Minimum voxel index for the grid, broadcastable to shape (3,), integer dtype.
voxel_size (NumericMaxRank1) – World space size of each voxel, broadcastable to shape (3,), floating dtype.
origin (NumericMaxRank1) – World space coordinate of the center of the [0, 0, 0] voxel of the grid, broadcastable to shape (3,), floating dtype.
mask (torch.Tensor | None) – Mask to apply to the grid, a torch.Tensor with shape (W, H, D) and boolean dtype.
device (DeviceIdentifier | None) – Device to create the grid on. Defaults to None, which inherits the device from mask, or uses "cpu" if mask is None.
- Returns:
grid (Grid) – A new Grid object.
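A minimal sketch (hypothetical dimensions) building an 8x8x8 dense block where only half of the voxels are active:

mask = torch.zeros(8, 8, 8, dtype=torch.bool)
mask[:, :, :4] = True  # activate the lower half along the k axis
grid = Grid.from_dense(dense_dims=[8, 8, 8], voxel_size=0.5, origin=0.0, mask=mask)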
- classmethod from_dense_axis_aligned_bounds(dense_dims: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, bounds_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, bounds_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1, voxel_center: bool = False, device: str | device = 'cpu') Grid[source]
Create a dense grid defined by axis-aligned bounds in world space.
The grid has voxels spanning dense_dims with the voxel size and origin set to fit within the specified axis-aligned bounding box defined by bounds_min and bounds_max.
If voxel_center is True, the bounds correspond to the centers of the corner voxels. If voxel_center is False, the bounds correspond to the outer edges of the corner voxels.
- Parameters:
dense_dims (NumericMaxRank1) – Dimensions of the dense grid, broadcastable to shape (3,), integer dtype.
bounds_min (NumericMaxRank1) – Minimum world space bounds of the grid, broadcastable to shape (3,), floating dtype.
bounds_max (NumericMaxRank1) – Maximum world space bounds of the grid, broadcastable to shape (3,), floating dtype.
voxel_center (bool) – Whether the bounds correspond to voxel centers (True) or edges (False). Defaults to False.
device (DeviceIdentifier) – Device to create the grid on. Defaults to "cpu".
- Returns:
grid (Grid) – A new Grid object.
- classmethod from_grid_batch(grid_batch: GridBatch, index: int = 0) Grid[source]
Extract a Grid from one grid in a GridBatch. If index exceeds the number of grids in the batch (minus one), an error is raised.
Note
The resulting Grid will share the same underlying data as the GridBatch, but have different metadata. Thus, is_contiguous() will return False on the resulting Grid if the GridBatch contains multiple grids.
- classmethod from_ijk(ijk: Tensor, voxel_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1, origin: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, device: str | device | None = None) Grid[source]
Create a grid from voxel coordinates. If multiple voxels map to the same coordinate, only one voxel will be created at that coordinate.
- Parameters:
ijk (torch.Tensor) – Voxel coordinates to populate. A torch.Tensor with shape (num_voxels, 3) with integer coordinates.
voxel_size (NumericMaxRank1) – Size of each voxel, broadcastable to shape (3,), floating dtype.
origin (NumericMaxRank1) – Origin of the grid, i.e. the world-space position of the center of the [0, 0, 0] voxel, broadcastable to shape (3,), floating dtype.
device (DeviceIdentifier | None) – Device to create the grid on. Defaults to None, which inherits the device of ijk.
- Returns:
grid (Grid) – A new Grid object.
- classmethod from_mesh(mesh_vertices: Tensor, mesh_faces: Tensor, voxel_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1, origin: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, device: str | device | None = None) Grid[source]
Create a new Grid by voxelizing the surface of a triangle mesh, i.e. voxels that intersect the surface of the mesh will be contained in the resulting Grid.
Note
This method works well but will be made much faster and more memory efficient in the next release.
- Parameters:
mesh_vertices (torch.Tensor) – Vertices of the mesh. A torch.Tensor with shape (num_vertices, 3).
mesh_faces (torch.Tensor) – Faces of the mesh. A torch.Tensor with shape (num_faces, 3).
voxel_size (NumericMaxRank1) – Size of each voxel, broadcastable to shape (3,), floating dtype.
origin (NumericMaxRank1) – Origin of the grid, i.e. the world-space position of the center of the [0, 0, 0] voxel, broadcastable to shape (3,), floating dtype.
device (DeviceIdentifier | None) – Device to create the grid on. Defaults to None, which inherits the device of mesh_vertices.
- Returns:
grid (Grid) – A new Grid object with voxels covering the surface of the input mesh.
- classmethod from_nanovdb(path: Path | str, *, device: str | device = 'cpu', verbose: bool = False) tuple[Grid, Tensor, str][source]
- classmethod from_nanovdb(path: Path | str, *, index: int, device: str | device = 'cpu', verbose: bool = False) tuple[Grid, Tensor, str]
- classmethod from_nanovdb(path: Path | str, *, name: str, device: str | device = 'cpu', verbose: bool = False) tuple[Grid, Tensor, str]
Load a Grid from a .nvdb file.
- Parameters:
path (str) – The path to the .nvdb file to load
index (int | None) – Optional single index to load from the file (mutually exclusive with other selectors)
name (str | None) – Optional single name to load from the file (mutually exclusive with other selectors)
device (DeviceIdentifier) – Which device to load the grid on
verbose (bool) – If set to true, print information about the loaded grid
- Returns:
grid (Grid) – The loaded Grid.
data (torch.Tensor) – A torch.Tensor containing the data associated with the grid, with shape (grid.num_voxels, channels*).
name (str) – The name of the loaded grid.
- classmethod from_nearest_voxels_to_points(points: Tensor, voxel_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1, origin: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, device: str | device | None = None) Grid[source]
Create a grid by adding the eight nearest voxels to every point in a point cloud.
- Parameters:
points (torch.Tensor) – Points to populate the grid from. A torch.Tensor with shape (num_points, 3).
voxel_size (NumericMaxRank1) – Size of each voxel, broadcastable to shape (3,), floating dtype.
origin (NumericMaxRank1) – Origin of the grid, broadcastable to shape (3,), floating dtype.
device (DeviceIdentifier | None) – Device to create the grid on. Defaults to None, which inherits the device of points.
- Returns:
grid (Grid) – A new Grid object.
- classmethod from_points(points: Tensor, voxel_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1, origin: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, device: str | device | None = None) Grid[source]
Create a grid from a point cloud.
- Parameters:
points (torch.Tensor) – Points to populate the grid from. A torch.Tensor with shape (num_points, 3).
voxel_size (NumericMaxRank1) – Size of each voxel, broadcastable to shape (3,), floating dtype.
origin (NumericMaxRank1) – Origin of the grid, broadcastable to shape (3,), floating dtype.
device (DeviceIdentifier | None) – Device to create the grid on. Defaults to None, which inherits the device of points.
- Returns:
grid (Grid) – A new Grid object.
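A minimal sketch of the call pattern (the point cloud and voxel size are hypothetical):

points = torch.randn(1000, 3, device="cuda")
grid = Grid.from_points(points, voxel_size=0.1, origin=0.0)
# Each input point should now fall inside an active voxel
inside = grid.points_in_grid(points)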
- classmethod from_zero_voxels(device: str | device = 'cpu', voxel_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1, origin: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0) Grid[source]
Create a new Grid with zero voxels on a specific device.
- Parameters:
device – The device to create the Grid on. Can be a string (e.g., "cuda", "cpu") or a torch.device object. Defaults to "cpu".
voxel_size (NumericMaxRank1) – Size of each voxel, broadcastable to shape (3,), floating dtype. Defaults to 1.
origin (NumericMaxRank1) – Origin of the grid, broadcastable to shape (3,), floating dtype. Defaults to 0.
- Returns:
grid (Grid) – A new Grid object with zero voxels.
Examples:
grid = Grid.from_zero_voxels("cuda", 1, 0)                  # device given as a string
grid = Grid.from_zero_voxels(torch.device("cuda:0"), 1, 0)  # torch.device directly
grid = Grid.from_zero_voxels(voxel_size=1, origin=0)        # defaults to CPU
- has_same_address_and_grid_count(other: Any) bool[source]
Check if this Grid has the same address and grid count as another Grid.
Note
This method is primarily for internal use to compare grids efficiently.
- property has_zero_voxels: bool
True if this Grid has zero active voxels, False otherwise.
- Returns:
has_zero_voxels (bool) – Whether the grid has zero active voxels.
- hilbert(offset: Tensor | None = None) Tensor[source]
Return Hilbert curve codes for active voxels in this grid.
Hilbert curves provide better spatial locality than Morton codes by ensuring that nearby points in 3D space are also nearby in the 1D curve ordering.
- Parameters:
offset – Optional offset to apply to voxel coordinates before encoding. If None, uses the negative minimum coordinate across all voxels.
- Returns:
torch.Tensor – A tensor of shape [num_voxels, 1] containing the Hilbert codes for each active voxel.
- hilbert_zyx(offset: Tensor | None = None) Tensor[source]
Return transposed Hilbert curve codes for active voxels in this grid.
Transposed Hilbert curves use zyx ordering instead of xyz. This variant can provide better spatial locality for certain access patterns.
- Parameters:
offset – Optional offset to apply to voxel coordinates before encoding. If None, uses the negative minimum coordinate across all voxels.
- Returns:
torch.Tensor – A tensor of shape [num_voxels, 1] containing the transposed Hilbert codes for each active voxel.
- property ijk: Tensor
The voxel coordinates of every active voxel in this Grid, in index order.
- Returns:
ijk (torch.Tensor) – A
(num_voxels, 3)-shaped tensor containing the voxel coordinates of each active voxel in index order.
- ijk_to_index(ijk: Tensor) Tensor[source]
Convert grid-space coordinates to linear index space.
Maps 3D grid-space coordinates to their corresponding linear indices. Returns -1 for coordinates that don't correspond to active voxels.
- Parameters:
ijk (torch.Tensor) – Voxel coordinates to convert. A torch.Tensor with shape (num_queries, 3) with integer coordinates.
- Returns:
index (torch.Tensor) – Linear indices for each coordinate, or -1 if not active. Shape: (num_queries,).
- ijk_to_inv_index(ijk: Tensor) Tensor[source]
Get the inverse permutation of ijk_to_index(), i.e. for each voxel index in the grid, return the index in the input ijk tensor.
Example:
# Create three ijk coordinates
ijk_coords = torch.tensor([[100, 0, 10], [1024, 1, 1], [2, 222, 2]])

# Create a grid with 3 voxels at those coordinates
grid = Grid.from_ijk(ijk_coords)

# Get the index coordinates of the three voxels
# Returns [0, 2, 1] meaning
#   [100, 0, 10] is voxel 0 in the grid
#   [1024, 1, 1] is voxel 2 in the grid
#   [2, 222, 2]  is voxel 1 in the grid
index_coords = grid.ijk_to_index(ijk_coords)

# Now let's say you have another set of ijk coordinates
query_ijk = torch.tensor([[2, 222, 2], [100, 0, 10], [50, 50, 50], [70, 0, 70]])

# Returns [1, 0, -1, -1] meaning
#   the voxel in grid's index 0 maps to query_ijk index 1
#   the voxel in grid's index 1 maps to query_ijk index 0
#   the voxel in grid's index 2 does not exist in query_ijk, so -1
#   the voxel in grid's index 3 does not exist in query_ijk, so -1
inv_index = grid.ijk_to_inv_index(query_ijk)
- Parameters:
ijk (torch.Tensor) – Voxel coordinates to convert. A torch.Tensor with shape (num_queries, 3) with integer coordinates.
- Returns:
inv_map (torch.Tensor) – Inverse permutation for ijk_to_index. A torch.Tensor with shape (num_queries,).
- inject_from(src_grid: Grid, src: Tensor, dst: Tensor | None = None, default_value: float | int | bool = 0) Tensor[source]
Inject data associated with the source grid to a torch.Tensor associated with this grid.
Note
The copy occurs in voxel space; the voxel-to-world transform is not applied.
Note
If you pass in destination data, dst, then dst will be modified in-place. If dst is None, a new torch.Tensor will be created with the shape (self.num_voxels, *src.shape[1:]) and filled with default_value for any voxels that do not have corresponding data in src.
Note
This method supports backpropagation through the injection operation.
- Parameters:
src (torch.Tensor) – Source data associated with src_grid. This must be a Tensor with shape (src_grid.num_voxels, *).
dst (torch.Tensor | None) – Optional destination data to be modified in-place. This must be a Tensor with shape (self.num_voxels, *) or None.
default_value (float | int | bool) – Value to fill in for voxels that do not have corresponding data in src. This is used only if dst is None. Default is 0.
- Returns:
dst (torch.Tensor) – The destination data after copying from src.
- inject_from_dense_cmajor(dense_data: Tensor, dense_origin: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0) Tensor[source]
Inject values from a dense torch.Tensor into a torch.Tensor associated with this Grid.
This is the "C Major" (channels major) version, which assumes the dense_data is in CXYZ order, i.e. the dense tensor has shape [channels*, dense_size_x, dense_size_y, dense_size_z].
Note
This method supports backpropagation through the read operation.
See also
inject_from_dense_cminor() for the "C Minor" (channels minor) version, which assumes the dense_data is in XYZC order.
See also
inject_to_dense_cmajor() for writing data to a dense tensor in "C Major" order.
- Parameters:
dense_data (torch.Tensor) – Dense torch.Tensor to read from. Shape: (channels*, dense_size_x, dense_size_y, dense_size_z).
dense_origin (NumericMaxRank1, optional) – Origin of the dense tensor in voxel space, broadcastable to shape (3,), integer dtype.
- Returns:
sparse_data (torch.Tensor) – Values from the dense tensor at voxel locations active in this Grid. Shape: (self.num_voxels, channels*).
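A minimal sketch (hypothetical channel count and dense extent, assuming this grid's active voxels lie inside the 16x16x16 block starting at voxel (0, 0, 0)):

dense = torch.randn(4, 16, 16, 16, device=grid.device)  # channels-major: (C, X, Y, Z)
sparse = grid.inject_from_dense_cmajor(dense, dense_origin=0)
assert sparse.shape == (grid.num_voxels, 4)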
- inject_from_dense_cminor(dense_data: Tensor, dense_origin: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0) Tensor[source]
Inject values from a dense torch.Tensor into a torch.Tensor associated with this Grid.
This is the "C Minor" (channels minor) version, which assumes the dense_data is in XYZC order, i.e. the dense tensor has shape [dense_size_x, dense_size_y, dense_size_z, channels*].
Note
This method supports backpropagation through the read operation.
See also
inject_from_dense_cmajor() for the "C Major" (channels major) version, which assumes the dense_data is in CXYZ order.
See also
inject_to_dense_cminor() for writing data to a dense tensor in "C Minor" order.
- Parameters:
dense_data (torch.Tensor) – Dense torch.Tensor to read from. Shape: (dense_size_x, dense_size_y, dense_size_z, channels*).
dense_origin (NumericMaxRank1, optional) – Origin of the dense tensor in voxel space, broadcastable to shape (3,), integer dtype.
- Returns:
sparse_data (torch.Tensor) – Values from the dense tensor at voxel locations active in this Grid. Shape: (self.num_voxels, channels*).
- inject_from_ijk(src_ijk: Tensor, src: Tensor, dst: Tensor | None = None, default_value: float | int | bool = 0)[source]
Inject data associated with a set of source voxel coordinates to a torch.Tensor associated with this grid.
Note
If you pass in destination data, dst, then dst will be modified in-place. If dst is None, a new torch.Tensor will be created with the shape (self.num_voxels, *src.shape[1:]) and filled with default_value for any voxels that do not have corresponding data in src.
Note
This method supports backpropagation through the injection operation.
- Parameters:
src_ijk (torch.Tensor) – Source voxel coordinates associated with src. A torch.Tensor with shape (num_src_voxels, 3) and integer coordinates.
src (torch.Tensor) – Data from the source ijk coordinates src_ijk. A torch.Tensor with shape (src_ijk.shape[0], *).
dst (torch.Tensor | None) – Optional destination data to be modified in-place. This must be a Tensor with shape (self.num_voxels, *) or None.
default_value (float | int | bool) – Value to fill in for voxels that do not have corresponding data in src. This is used only if dst is None. Default is 0.
- inject_to(dst_grid: Grid, src: Tensor, dst: Tensor | None = None, default_value: float | int | bool = 0) Tensor[source]
Inject data associated with this Grid to data associated with dst_grid.
Note
If you pass in destination data, dst, then dst will be modified in-place. If dst is None, a new torch.Tensor will be created with the shape (dst_grid.num_voxels, *src.shape[1:]) and filled with default_value for any voxels that do not have corresponding data in src.
Note
This method supports backpropagation through the injection operation.
- Parameters:
src (torch.Tensor) – Source data associated with this Grid. This must be a Tensor with shape (self.num_voxels, *).
dst (torch.Tensor | None) – Optional destination data to be modified in-place. This must be a Tensor with shape (dst_grid.num_voxels, *) or None.
default_value (float | int | bool) – Value to fill in for voxels that do not have corresponding data in src. This is used only if dst is None. Default is 0.
- Returns:
dst (torch.Tensor) – The destination data associated with dst_grid after injection.
- inject_to_dense_cmajor(sparse_data: Tensor, min_coord: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None, grid_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None) Tensor[source]
Write values from a torch.Tensor associated with this Grid into a dense torch.Tensor.
This is the "C Major" (channels major) version, which assumes the returned dense data is in CXYZ order, i.e. the dense tensor has shape [channels*, dense_size_x, dense_size_y, dense_size_z].
This method creates the dense tensor to return, and fills it with values from the sparse grid within the range defined by min_coord and grid_size. Voxels not present in the sparse grid are filled with zeros. i.e. this method will copy all the voxel values in the range [min_coord, min_coord + grid_size) into a dense tensor of shape [channels*, dense_size_x, dense_size_y, dense_size_z], such that min_coord maps to index (0, 0, 0) in the dense tensor, and min_coord + grid_size - 1 maps to index (dense_size_x - 1, dense_size_y - 1, dense_size_z - 1) in the dense tensor.
Note
This method supports backpropagation through the write operation.
See also
inject_from_dense_cmajor() for reading from a dense tensor in "C Major" order, which assumes the dense tensor has shape [channels*, dense_size_x, dense_size_y, dense_size_z].
See also
inject_to_dense_cminor() for writing to a dense tensor in "C Minor" order.
- Parameters:
sparse_data (torch.Tensor) – A torch.Tensor of data associated with this Grid with shape (self.num_voxels, channels*).
min_coord (NumericMaxRank1 | None) – Minimum voxel coordinate to read from the Grid into the output dense tensor, broadcastable to shape (3,), integer dtype, or None. If set to None, this will be the minimum voxel coordinate of this Grid's bounding box.
grid_size (NumericMaxRank1 | None) – Size of the output dense tensor, broadcastable to shape (3,), integer dtype, or None. If None, computed to fit all active voxels starting from min_coord, i.e. if min_coord is (2, 2, 2) and the maximum active voxel in the grid is (5, 5, 5), the computed grid_size will be (4, 4, 4).
- Returns:
dense_data (torch.Tensor) – Dense torch.Tensor containing the sparse data with shape (channels*, dense_size_x, dense_size_y, dense_size_z).
- inject_to_dense_cminor(sparse_data: Tensor, min_coord: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None, grid_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None) Tensor[source]
Write values from a torch.Tensor associated with this Grid into a dense torch.Tensor.
This is the "C Minor" (channels minor) version, which assumes the returned dense data is in XYZC order, i.e. the dense tensor has shape [dense_size_x, dense_size_y, dense_size_z, channels*].
This method creates the dense tensor to return, and fills it with values from the sparse grid within the range defined by min_coord and grid_size. Voxels not present in the sparse grid are filled with zeros. i.e. this method will copy all the voxel values in the range [min_coord, min_coord + grid_size) into a dense tensor of shape [dense_size_x, dense_size_y, dense_size_z, channels*], such that min_coord maps to index (0, 0, 0) in the dense tensor, and min_coord + grid_size - 1 maps to index (dense_size_x - 1, dense_size_y - 1, dense_size_z - 1) in the dense tensor.
Note
This method supports backpropagation through the write operation.
See also
inject_from_dense_cminor() for reading from a dense tensor in "C Minor" order, which assumes the dense tensor has shape [dense_size_x, dense_size_y, dense_size_z, channels*].
See also
inject_to_dense_cmajor() for writing to a dense tensor in "C Major" order.
- Parameters:
sparse_data (torch.Tensor) – A torch.Tensor of data associated with this Grid with shape (self.num_voxels, channels*).
min_coord (NumericMaxRank1 | None) – Minimum voxel coordinate to read from the Grid into the output dense tensor, broadcastable to shape (3,), integer dtype, or None. If set to None, this will be the minimum voxel coordinate of this Grid's bounding box.
grid_size (NumericMaxRank1 | None) – Size of the output dense tensor, broadcastable to shape (3,), integer dtype, or None. If None, computed to fit all active voxels starting from min_coord, i.e. if min_coord is (2, 2, 2) and the maximum active voxel in the grid is (5, 5, 5), the computed grid_size will be (4, 4, 4).
- Returns:
dense_data (torch.Tensor) – Dense torch.Tensor containing the sparse data with shape (dense_size_x, dense_size_y, dense_size_z, channels*).
- integrate_tsdf(truncation_distance: float, projection_matrix: Tensor, cam_to_world_matrix: Tensor, tsdf: Tensor, weights: Tensor, depth_image: Tensor, weight_image: Tensor | None = None) tuple[Grid, Tensor, Tensor][source]
Integrate depth images into a Truncated Signed Distance Function (TSDF) volume.
Updates the given TSDF values and weights associated with this Grid by integrating new depth observations from a given camera viewpoint. This is commonly used for 3D reconstruction from RGB-D sensors.
See also
integrate_tsdf_with_features() for integrating features along with TSDF values.
- Parameters:
truncation_distance (float) – Maximum distance to truncate TSDF values (in world units).
projection_matrix (torch.Tensor) – Camera projection matrix. A tensor-like object with shape (3, 3).
cam_to_world_matrix (torch.Tensor) – Camera to world transformation matrix. A tensor-like object with shape (4, 4).
tsdf (torch.Tensor) – Current TSDF values for each voxel. A torch.Tensor with shape (self.num_voxels, 1).
weights (torch.Tensor) – Current integration weights for each voxel. A torch.Tensor with shape (self.num_voxels, 1).
depth_image (torch.Tensor) – Depth image from the camera. A torch.Tensor with shape (height, width).
weight_image (torch.Tensor, optional) – Weight of each depth sample in the image. A torch.Tensor with shape (height, width). If None, defaults to uniform weights.
- Returns:
new_grid (Grid) – Updated Grid with potentially expanded voxels.
new_tsdf (torch.Tensor) – Updated TSDF values as a torch.Tensor associated with new_grid.
new_weights (torch.Tensor) – Updated weights as a torch.Tensor associated with new_grid.
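A minimal sketch of a fusion loop; frames (a hypothetical sequence of depth images and camera poses) and K (a hypothetical (3, 3) intrinsics matrix) are placeholders, and the grid, TSDF, and weights are rebound each iteration because the grid may expand:

tsdf = torch.zeros(grid.num_voxels, 1, device=grid.device)
weights = torch.zeros(grid.num_voxels, 1, device=grid.device)
for depth_image, cam_to_world in frames:
    grid, tsdf, weights = grid.integrate_tsdf(
        truncation_distance=0.05,
        projection_matrix=K,
        cam_to_world_matrix=cam_to_world,
        tsdf=tsdf,
        weights=weights,
        depth_image=depth_image,
    )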
- integrate_tsdf_with_features(truncation_distance: float, projection_matrix: Tensor, cam_to_world_matrix: Tensor, tsdf: Tensor, features: Tensor, weights: Tensor, depth_image: Tensor, feature_image: Tensor, weight_image: Tensor | None = None) tuple[Grid, Tensor, Tensor, Tensor][source]
Integrate depth and feature images into a Truncated Signed Distance Function (TSDF) volume.
Updates the given TSDF values, features, and weights associated with this Grid by integrating new depth and feature observations from a given camera viewpoint. This is commonly used for 3D reconstruction from RGB-D sensors.
See also
integrate_tsdf() for integrating TSDF values without features.
- Parameters:
truncation_distance (float) – Maximum distance to truncate TSDF values (in world units).
projection_matrix (torch.Tensor) – Camera projection matrix. A tensor-like object with shape (3, 3).
cam_to_world_matrix (torch.Tensor) – Camera to world transformation matrix. A tensor-like object with shape (4, 4).
features (torch.Tensor) – Current feature values associated with each voxel in this Grid. A torch.Tensor with shape (total_voxels, feature_dim).
tsdf (torch.Tensor) – Current TSDF values for each voxel. A torch.Tensor with shape (self.num_voxels, 1).
weights (torch.Tensor) – Current integration weights for each voxel. A torch.Tensor with shape (self.num_voxels, 1).
depth_image (torch.Tensor) – Depth image from the camera. A torch.Tensor with shape (height, width).
feature_image (torch.Tensor) – Feature image (e.g., RGB) from the camera. A torch.Tensor with shape (height, width, feature_dim).
weight_image (torch.Tensor, optional) – Weight of each depth sample in the image. A torch.Tensor with shape (height, width). If None, defaults to uniform weights.
- Returns:
new_grid (Grid) – Updated Grid with potentially expanded voxels.
new_tsdf (torch.Tensor) – Updated TSDF values as a torch.Tensor associated with new_grid with shape (new_grid.num_voxels, 1).
new_features (torch.Tensor) – Updated features as a torch.Tensor associated with new_grid with shape (new_grid.num_voxels, feature_dim).
new_weights (torch.Tensor) – Updated weights as a torch.Tensor associated with new_grid with shape (new_grid.num_voxels, 1).
- is_contiguous() bool[source]
Check if the grid data is stored contiguously in memory. This is generally True for a Grid since it represents a single grid, though it can be False if you constructed the Grid using from_grid_batch() on a GridBatch with more than one grid.
- Returns:
is_contiguous (bool) – True if all the data for this grid is stored contiguously in memory, False otherwise.
- marching_cubes(field: Tensor, level: float = 0.0) tuple[Tensor, Tensor, Tensor][source]
Extract an isosurface mesh over data associated with this Grid using the marching cubes algorithm. Generates a triangle mesh representing the isosurface at the specified level from a scalar field defined on the voxels.
- Parameters:
field (torch.Tensor) – Scalar field values at each voxel in this Grid. A torch.Tensor with shape (total_voxels, 1).
level (float) – The isovalue to extract the surface at. Default is 0.0.
- Returns:
vertex_positions (torch.Tensor) – Vertex positions of the mesh. Shape: (num_vertices, 3).
face_indices (torch.Tensor) – Triangle face indices. Shape: (num_faces, 3).
vertex_normals (torch.Tensor) – Vertex normals (computed from gradients). Shape: (num_vertices, 3).
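A minimal sketch of the call pattern, assuming sdf holds one signed distance value per voxel (hypothetical data):

sdf = torch.randn(grid.num_voxels, 1, device=grid.device)
vertices, faces, normals = grid.marching_cubes(sdf, level=0.0)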
- max_pool(pool_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, data: Tensor, stride: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, coarse_grid: Grid | None = None) tuple[Tensor, Grid][source]
Apply max pooling to the given data associated with this Grid, returned as data associated with the given coarse_grid or a newly created coarse Grid.
Performs max pooling on the voxel data, reducing the resolution by the specified pool_factor. Each output voxel contains the maximum of the corresponding input voxels within the pooling window. The pooling operation respects the sparse structure of this Grid and the given coarse_grid.
Note
If you pass coarse_grid = None, the returned coarse grid will have its voxel size multiplied by the pool_factor and its origin adjusted accordingly.
Note
This method supports backpropagation through the pooling operation.
- Parameters:
pool_factor (NumericMaxRank1) – The factor by which to downsample the grid, broadcastable to shape (3,), integer dtype.
data (torch.Tensor) – The voxel data to pool. A torch.Tensor with shape (total_voxels, channels).
stride (NumericMaxRank1) – The stride to use when pooling, broadcastable to shape (3,), integer dtype. Default is 0.
coarse_grid (Grid, optional) – Pre-allocated coarse grid to use for output. If None, a new Grid is created.
- Returns:
pooled_data (torch.Tensor) – A tensor containing the pooled voxel data with shape (coarse_total_voxels, channels).
coarse_grid (Grid) – A Grid object representing the coarse grid topology after pooling. Matches the provided coarse_grid if given.
- merged_grid(other: Grid) Grid[source]
Return a new Grid that is the union of this Grid with another. The voxel-to-world transform of the resulting grid matches that of this Grid.
- morton(offset: Tensor | None = None) Tensor[source]
Return Morton codes (Z-order curve) for active voxels in this grid.
Morton codes use xyz bit interleaving to create a space-filling curve that preserves spatial locality. This is useful for serialization, sorting, and spatial data structures.
- Parameters:
offset – Optional offset to apply to voxel coordinates before encoding. If None, uses the negative minimum coordinate across all voxels.
- Returns:
torch.Tensor – A tensor of shape [num_voxels, 1] containing the Morton codes for each active voxel.
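A common use is reordering per-voxel data along the space-filling curve; voxel_data below is a hypothetical (num_voxels, channels) tensor associated with this grid:

codes = grid.morton().squeeze(-1)  # (num_voxels,)
order = torch.argsort(codes)
voxel_data_sorted = voxel_data[order]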
- morton_zyx(offset: Tensor | None = None) Tensor[source]
Return transposed Morton codes (Z-order curve) for active voxels in this grid.
Transposed Morton codes use zyx bit interleaving to create a space-filling curve. This variant can provide better spatial locality for certain access patterns.
- Parameters:
offset – Optional offset to apply to voxel coordinates before encoding. If None, uses the negative minimum coordinate across all voxels.
- Returns:
torch.Tensor – A tensor of shape [num_voxels, 1] containing the transposed Morton codes for each active voxel.
- neighbor_indexes(ijk: Tensor, extent: int, bitshift: int = 0) Tensor[source]
Get indexes of neighboring voxels in this Grid in an N-ring neighborhood of each voxel coordinate in ijk.
- Parameters:
ijk (torch.Tensor) – Voxel coordinates to find neighbors for. A torch.Tensor with shape (num_queries, 3) with integer coordinates.
extent (int) – Size of the neighborhood ring (N-ring).
bitshift (int) – An optional bit shift value to apply to each input ijk coordinate, i.e. passing bitshift = 2 is the same as calling neighbor_indexes(ijk << 2, extent). Default is 0.
- Returns:
neighbor_indexes (torch.Tensor) – A torch.Tensor of shape (num_queries, N) containing the linear indexes of neighboring voxels for each voxel coordinate in ijk. If some neighbors are not active in the grid, their indexes will be -1.
- property num_bytes: int
The size in bytes this Grid occupies in memory.
- Returns:
num_bytes (int) – The size in bytes of the grid.
- property num_leaf_nodes: int
The number of leaf nodes in the NanoVDB tree underlying this Grid.
- Returns:
num_leaf_nodes (int) – The number of leaf nodes in the grid.
- property num_voxels: int
The number of active voxels in this Grid.
- Returns:
num_voxels (int) – The number of active voxels in the grid.
- property origin: Tensor
The world-space origin of this Grid, i.e. the world-space position of the center of the voxel at (0, 0, 0) in voxel space.
- Returns:
origin (torch.Tensor) – A
(3,)-shaped tensor representing the world-space origin.
- points_in_grid(points: Tensor) Tensor[source]
Check if world-space points are located within active voxels. This method applies the world-to-voxel transform of this Grid to each point, then checks if the resulting voxel coordinates correspond to active voxels.
- Parameters:
points (torch.Tensor) – World-space points to test. A torch.Tensor with shape (num_queries, 3).
- Returns:
mask (torch.Tensor) – A Boolean mask indicating which points are in active voxels. Shape:
(num_queries,).
- pruned_grid(mask: Tensor) Grid[source]
Return a new Grid where voxels are pruned based on a boolean mask. True values in the mask indicate that the corresponding voxel should be kept, while False values indicate that the voxel should be removed.
- Parameters:
mask (torch.Tensor) – Boolean mask for each voxel. A torch.Tensor with shape (self.num_voxels,).
- Returns:
pruned_grid (Grid) – A new Grid containing only voxels at indices where mask is True.
- ray_implicit_intersection(ray_origins: Tensor, ray_directions: Tensor, grid_scalars: Tensor, eps: float = 0.0) Tensor[source]
Find ray intersections with an implicit surface defined on the voxels of this Grid.
Note
The implicit surface is defined by the zero level-set of the scalar field provided in grid_scalars.
Note
The intersection distances are returned as multiples of the ray direction length. If the ray direction is normalized, the distances correspond to Euclidean distances.
- Parameters:
ray_origins (torch.Tensor) – Starting points of rays in world space. A torch.Tensor with shape (num_rays, 3).
ray_directions (torch.Tensor) – Direction vectors of rays. A torch.Tensor with shape (num_rays, 3). Note that the intersection distances are returned as a multiple of the ray direction length.
grid_scalars (torch.Tensor) – Scalar field values at each voxel. A torch.Tensor with shape (total_voxels, 1).
eps (float) – Epsilon value which can improve numerical stability. Default is 0.0.
- Returns:
intersection_distances (torch.Tensor) – Intersection distance along each input ray of the zero level-set of the input scalar field, or -1 if no intersection occurs. A torch.Tensor with shape (num_rays,).
- rays_intersect_voxels(ray_origins: Tensor, ray_directions: Tensor, eps: float = 0.0) Tensor[source]
Given a set of rays, return a boolean torch.Tensor indicating which rays intersect this Grid.
- Parameters:
ray_origins (torch.Tensor) – An (N, 3)-shaped tensor of ray origins.
ray_directions (torch.Tensor) – An (N, 3)-shaped tensor of ray directions.
eps (float) – A small value which can help with numerical stability. Default is 0.0.
- Returns:
rays_intersect (torch.Tensor) – A boolean torch.Tensor of shape (N,) indicating which rays intersect the grid, i.e. rays_intersect_voxels(ray_origins, ray_directions, eps)[i] is True if the ray corresponding to ray_origins[i], ray_directions[i] intersects this Grid.
- refine(subdiv_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, data: Tensor, mask: Tensor | None = None, fine_grid: Grid | None = None) tuple[Tensor, Grid][source]
Refine data associated with this Grid into a higher-resolution grid by subdividing each voxel, i.e. for each voxel (i, j, k) in this Grid, copy the data associated with that voxel to the voxels (subdiv_factor[0]*i + di, subdiv_factor[1]*j + dj, subdiv_factor[2]*k + dk) for di, dj, dk in {0, ..., subdiv_factor - 1} in the output data associated with fine_grid, if that voxel exists.
Note
If you pass fine_grid = None, this method will create a new fine Grid with its voxel size divided by the subdiv_factor and its origin adjusted accordingly.
Note
You can skip copying data at certain voxels in this Grid by passing a boolean mask of shape (self.num_voxels,). Only data at voxels corresponding to True values in the mask will be refined.
Note
This method supports backpropagation through the refinement operation.
See also
refined_grid() for obtaining a refined version of the grid structure without refining data.
- Parameters:
subdiv_factor (NumericMaxRank1) – Refinement factor between this Grid and the fine grid, broadcastable to shape (3,), integer dtype.
data (torch.Tensor) – Voxel data to refine. A torch.Tensor of shape (total_voxels, channels).
mask (torch.Tensor, optional) – Boolean mask of shape (self.num_voxels,) indicating which voxels in the input Grid to refine. If None, data associated with all input voxels are refined.
fine_grid (Grid, optional) – Pre-allocated fine Grid to use for output. If None, a new Grid is created.
- Returns:
tuple[torch.Tensor, Grid] – A tuple containing the refined data as a torch.Tensor and the fine Grid containing the refined structure.
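A minimal sketch of the call pattern (hypothetical channel count), subdividing every voxel 2x2x2 and letting the method allocate the fine grid:

data = torch.randn(grid.num_voxels, 8, device=grid.device)
fine_data, fine_grid = grid.refine(2, data)
assert fine_data.shape == (fine_grid.num_voxels, 8)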
- refined_grid(subdiv_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, mask: Tensor | None = None) Grid[source]
Return a refined version of this Grid, i.e. each voxel in this Grid is subdivided by the specified subdiv_factor to create a higher-resolution grid.
Note
You can skip refining certain voxels in this Grid by passing a boolean mask of shape (self.num_voxels,). Only voxels corresponding to True values in the mask will be refined.
- sample_bezier(points: Tensor, voxel_data: Tensor) Tensor[source]
Sample data in a torch.Tensor associated with this Grid at world-space points using Bézier interpolation.
This method uses Bézier interpolation to interpolate data values at arbitrary continuous positions in world space, based on values defined at voxel centers.
Note
This method supports backpropagation through the interpolation operation.
Note
This method assumes that the voxel data is defined at the centers of voxels. Samples outside the grid return zero.
See also
sample_trilinear() for trilinear interpolation.
See also
sample_bezier_with_grad() for Bézier interpolation which also returns spatial gradients.
- Parameters:
points (torch.Tensor) – World-space points to sample at. A torch.Tensor of shape (num_queries, 3).
voxel_data (torch.Tensor) – Data associated with each voxel in this Grid. A torch.Tensor of shape (total_voxels, channels*).
- Returns:
interpolated_data (torch.Tensor) – Interpolated data at each point. Shape:
(num_queries, channels*).
- sample_bezier_with_grad(points: Tensor, voxel_data: Tensor) tuple[Tensor, Tensor][source]
Sample data in a torch.Tensor associated with this Grid at world-space points using Bézier interpolation, and return the sampled values and their spatial gradients at those points.
This method uses Bézier interpolation to interpolate data values at arbitrary continuous positions in world space, based on values defined at voxel centers. It returns both the interpolated data and the gradients of the interpolated data with respect to the world coordinates.
Note
This method assumes that the voxel data is defined at the centers of voxels. Samples outside the grid return zero.
Note
This method supports backpropagation through the interpolation operation.
See also
sample_bezier() for Bézier interpolation without gradients.
See also
sample_trilinear_with_grad() for trilinear interpolation with spatial gradients.
- Parameters:
points (torch.Tensor) – World-space points to sample at. A torch.Tensor of shape (num_queries, 3).
voxel_data (torch.Tensor) – Data associated with each voxel in this Grid. A torch.Tensor of shape (total_voxels, channels*).
- Returns:
interpolated_data (torch.Tensor) – Interpolated data at each point. Shape: (num_queries, channels*).
interpolation_gradients (torch.Tensor) – Gradients of the interpolated data with respect to world coordinates. This is the spatial gradient of the Bézier interpolation at each point. Shape: (num_queries, 3, channels*).
- sample_trilinear(points: Tensor, voxel_data: Tensor) Tensor[source]
Sample data in a torch.Tensor associated with this Grid at world-space points using trilinear interpolation.
This method uses trilinear interpolation to interpolate data values at arbitrary continuous positions in world space, based on values defined at voxel centers.
Note
This method supports backpropagation through the interpolation operation.
Note
This method assumes that the voxel data is defined at the centers of voxels. Samples outside the grid return zero.
See also
sample_bezier() for Bézier interpolation.
See also
sample_trilinear_with_grad() for trilinear interpolation which also returns spatial gradients.
- Parameters:
points (torch.Tensor) – World-space points to sample at. A torch.Tensor of shape (num_queries, 3).
voxel_data (torch.Tensor) – Data associated with each voxel in this Grid. A torch.Tensor of shape (total_voxels, channels*).
- Returns:
interpolated_data (torch.Tensor) – Interpolated data at each point. Shape:
(num_queries, channels*).
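A minimal sketch of the call pattern (hypothetical query points and channel count), also showing that gradients flow back to the voxel data:

voxel_data = torch.randn(grid.num_voxels, 3, device=grid.device, requires_grad=True)
query_points = torch.rand(128, 3, device=grid.device) * 4.0
samples = grid.sample_trilinear(query_points, voxel_data)  # (128, 3)
samples.sum().backward()  # populates voxel_data.grad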
- sample_trilinear_with_grad(points: Tensor, voxel_data: Tensor) tuple[Tensor, Tensor][source]
Sample data in a torch.Tensor associated with this Grid at world-space points using trilinear interpolation, and return the sampled values and their spatial gradients at those points.
This method uses trilinear interpolation to interpolate data values at arbitrary continuous positions in world space, based on values defined at voxel centers. It returns both the interpolated data and the gradients of the interpolated data with respect to the world coordinates.
Note
This method assumes that the voxel data is defined at the centers of voxels. Samples outside the grid return zero.
Note
This method supports backpropagation through the interpolation operation.
See also
sample_trilinear() for trilinear interpolation without gradients.
See also
sample_bezier_with_grad() for Bézier interpolation with spatial gradients.
- Parameters:
points (torch.Tensor) – World-space points to sample at. A torch.Tensor of shape (num_queries, 3).
voxel_data (torch.Tensor) – Data associated with each voxel in this Grid. A torch.Tensor of shape (total_voxels, channels*).
- Returns:
interpolated_data (torch.Tensor) – Interpolated data at each point. Shape: (num_queries, channels*).
interpolation_gradients (torch.Tensor) – Gradients of the interpolated data with respect to world coordinates. This is the spatial gradient of the trilinear interpolation at each point. Shape: (num_queries, 3, channels*).
- save_nanovdb(path: str | Path, data: Tensor | None = None, name: str | None = None, compressed: bool = False, verbose: bool = False) None[source]
Save this Grid and optional data associated with it to a .nvdb file.
The grid is saved in the NanoVDB format, which can be loaded by other applications that support OpenVDB/NanoVDB.
- Parameters:
path (str | pathlib.Path) – The file path to save to. Should have .nvdb extension.
data (torch.Tensor, optional) – Voxel data to save with the grid. Shape: (self.num_voxels, channels). If None, only the grid structure is saved.
name (str, optional) – Optional name for the grid.
compressed (bool) – Whether to compress the data using Blosc compression. Default is False.
verbose (bool) – Whether to print information about the saved grid. Default is False.
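A minimal sketch of a save/load round trip (the file name and data are hypothetical):

data = torch.rand(grid.num_voxels, 1, device=grid.device)
grid.save_nanovdb("occupancy.nvdb", data=data, name="occupancy")
loaded_grid, loaded_data, name = Grid.from_nanovdb("occupancy.nvdb", device="cuda")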
- segments_along_rays(ray_origins: Tensor, ray_directions: Tensor, max_segments: int, eps: float = 0.0) JaggedTensor[source]
Return segments of continuous ray traversal through this Grid. Each segment is represented by its start and end distance along the ray, i.e. for each ray, the output contains a variable number of segments, each defined by a pair of distances (t_start, t_end), where t_start is the distance when the ray goes from being outside this Grid to inside, and t_end is the distance when the ray exits the grid.
- Parameters:
ray_origins (torch.Tensor) – Starting points of rays in world space. A torch.Tensor with shape (num_rays, 3).
ray_directions (torch.Tensor) – Direction vectors of rays. A torch.Tensor with shape (num_rays, 3). Note that the intersection distances are returned as a multiple of the ray direction length.
max_segments (int) – Maximum number of segments to return per ray.
eps (float) – Small epsilon value which can help with numerical stability. Default is 0.0.
- Returns:
segments (JaggedTensor) – A
JaggedTensorcontaining the segments along each ray. The JaggedTensor has shape:(num_rays, num_segments_per_ray, 2), wherenum_segments_per_rayvaries per ray up tomax_segments. Each segment is represented by a pair of distances(t_start, t_end).
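As a minimal sketch (the grid and ray values are arbitrary), the per-ray segments could be queried like this:

import torch
import fvdb

# Three voxels in a row along +x, so a ray along +x produces a single continuous segment.
grid = fvdb.Grid.from_ijk(
    torch.tensor([[0, 0, 0], [1, 0, 0], [2, 0, 0]], device="cuda"),
    voxel_size=1.0,
    origin=0.0,
    device="cuda",
)

ray_origins = torch.tensor([[-5.0, 0.0, 0.0]], device="cuda")
ray_directions = torch.tensor([[1.0, 0.0, 0.0]], device="cuda")  # unit-length direction

segments = grid.segments_along_rays(ray_origins, ray_directions, max_segments=4)
# segments.jdata holds the (t_start, t_end) pairs; distances are multiples of the direction length.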
- sparse_conv_halo(input: Tensor, weight: Tensor, variant: int = 8) Tensor[source]
Perform sparse convolution on an input
torch.Tensorassociated with thisGridusing halo exchange optimization to efficiently handle boundary conditions in distributed or multi-block sparse grids.Note
Halo convolution only supports convolving when the input and output grid topologies match; thus, this method does not accept an output grid, i.e. the output features will be associated with this
Grid.- Parameters:
input (torch.Tensor) – Input features for each voxel in this
Grid. Shape:(self.num_voxels, in_channels).weight (torch.Tensor) – Convolution weights. Shape
(out_channels, in_channels, kernel_size_x, kernel_size_y, kernel_size_z).variant (int) – Variant of the halo implementation to use. Default is 8. Note: This is cryptic on purpose and you should change it only if you know what you’re doing.
- Returns:
out_features (torch.Tensor) – Output features with shape
(self.num_voxels, out_channels)after convolution.
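A minimal sketch of the call (channel counts and kernel size are arbitrary; the default variant is kept):

import torch
import fvdb

grid = fvdb.Grid.from_ijk(
    torch.tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0]], device="cuda"),
    voxel_size=1.0,
    origin=0.0,
    device="cuda",
)

in_channels, out_channels = 8, 16
features = torch.randn(grid.num_voxels, in_channels, device="cuda")
weight = torch.randn(out_channels, in_channels, 3, 3, 3, device="cuda")  # 3x3x3 kernel

out_features = grid.sparse_conv_halo(features, weight)
# out_features has shape (grid.num_voxels, out_channels) and stays on this Grid's topology.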
- splat_bezier(points: Tensor, points_data: Tensor) Tensor[source]
Splat data at a set of input points into a
torch.Tensorassociated with thisGridusing Bézier interpolation. i.e. each point distributes its data to the surrounding voxels using cubic Bézier interpolation weights.Note
This method assumes that the voxel data is defined at the centers of voxels.
Note
This method supports backpropagation through the splatting operation.
- Parameters:
points (torch.Tensor) – World-space positions of points used to splat data. Shape:
(num_points, 3).points_data (torch.Tensor) – Data associated with each point to splat into the grid. Shape:
(num_points, channels*).
- Returns:
splatted_features (torch.Tensor) – Accumulated features at each voxel after splatting. Shape:
(self.num_voxels, channels*).
- splat_trilinear(points: Tensor, points_data: Tensor) Tensor[source]
Splat data at a set of input points into a
torch.Tensorassociated with thisGridusing trilinear interpolation. i.e. each point distributes its data to the surrounding voxels using trilinear interpolation weights.Note
This method assumes that the voxel data is defined at the centers of voxels.
Note
This method supports backpropagation through the splatting operation.
- Parameters:
points (torch.Tensor) – World-space positions of points used to splat data. Shape:
(num_points, 3).points_data (torch.Tensor) – Data associated with each point to splat into the grid. Shape:
(num_points, channels*).
- Returns:
splatted_features (torch.Tensor) – Accumulated features at each voxel after splatting. Shape:
(self.num_voxels, channels*).
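The sketch below (point count, channel count, and voxel size chosen arbitrarily) builds a grid near a point cloud and splats per-point data into it. Contributions to voxels not present in the grid are presumably dropped, so a denser grid (e.g. one built with from_nearest_voxels_to_points()) may be preferable in practice:

import torch
import fvdb

points = torch.rand(100, 3, device="cuda") * 2.0   # points inside a 2x2x2 world-space box
points_data = torch.randn(100, 3, device="cuda")   # e.g. one RGB value per point

voxel_size = 0.25
ijk = torch.round(points / voxel_size).to(torch.int32)  # nearest voxel to each point
grid = fvdb.Grid.from_ijk(ijk, voxel_size=voxel_size, origin=0.0, device="cuda")

splatted = grid.splat_trilinear(points, points_data)  # shape: (grid.num_voxels, 3)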
- to(target: str | device | Tensor | JaggedTensor | Grid) Grid[source]
Move this
Gridto a target device or to match the device of an object (e.g. anotherGrid, aJaggedTensor, atorch.Tensor, etc.).- Parameters:
target (str | torch.device | torch.Tensor | JaggedTensor | Grid) – Target object to determine the device.
- Returns:
grid (Grid) – A new
Gridon the target device or thisGridif the target device is the same asself.device.
- uniform_ray_samples(ray_origins: Tensor, ray_directions: Tensor, t_min: Tensor, t_max: Tensor, step_size: float, cone_angle: float = 0.0, include_end_segments: bool = True, return_midpoints: bool = False, eps: float = 0.0) JaggedTensor[source]
Generate uniformly spaced samples along rays intersecting this
Grid.This method creates sample points at regular intervals along rays, but only for segments that intersect with active voxels. The uniform samples start at
ray_origins + ray_directions * t_minand end atray_origins + ray_directions * t_max, with spacing defined bystep_size, and only include samples which lie within the grid.If
cone_angleis greater than zero, the method uses cone tracing to adjust the sampling rate based on the distance from the ray origin, allowing for adaptive sampling.Note
The returned samples are represented as a
JaggedTensor, where each element contains either the start and end distance of each sample segment along the ray or the midpoint of each sample segment ifreturn_midpointsisTrue.Note
If
include_end_segmentsisTrue, partial segments at the start and end of each ray that do not fit the fullstep_sizewill be included.- Parameters:
ray_origins (torch.Tensor) – Starting points of rays in world space. A
torch.Tensorwith shape(num_rays, 3).ray_directions (torch.Tensor) – Direction vectors of rays. A
torch.Tensorwith shape:(num_rays, 3). Note that the intersection distances are returned as a multiple of the ray direction length.t_min (torch.Tensor) – Minimum distance along rays to start sampling. A
Tensorof shape(num_rays,).t_max (torch.Tensor) – Maximum distance along rays to stop sampling. A
Tensorof shape(num_rays,).step_size (float) – Distance between samples along each ray.
cone_angle (float) – Cone angle for cone tracing (in radians). Default is 0.0.
include_end_segments (bool) – Whether to include partial segments at ray ends. Default is
True.return_midpoints (bool) – Whether to return segment midpoints instead of start points i.e if this value is
True, the samples will lie halfway between each step. Default isFalse.eps (float) – Epsilon value which can improve numerical stability. Default is
0.0.
- Returns:
samples (JaggedTensor) – A
JaggedTensorcontaining the samples along each ray. TheJaggedTensorhas shape:(num_rays, num_samples_per_ray,), wherenum_samples_per_rayvaries per ray. Each sample insamples[r]is a distance along the rayray_origins + ray_directions * t.
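For example (a minimal sketch with an axis-aligned ray and an arbitrary step size):

import torch
import fvdb

# Eight voxels in a row along +x.
grid = fvdb.Grid.from_ijk(
    torch.tensor([[i, 0, 0] for i in range(8)], device="cuda"),
    voxel_size=1.0,
    origin=0.0,
    device="cuda",
)

ray_origins = torch.tensor([[-2.0, 0.0, 0.0]], device="cuda")
ray_directions = torch.tensor([[1.0, 0.0, 0.0]], device="cuda")
t_min = torch.zeros(1, device="cuda")
t_max = torch.full((1,), 100.0, device="cuda")

samples = grid.uniform_ray_samples(
    ray_origins, ray_directions, t_min, t_max, step_size=0.5, return_midpoints=True
)
# samples.jdata holds, per ray, the midpoint distance of each 0.5-long step that lies inside the grid.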
- property voxel_size: Tensor
The world-space size of each voxel in this
Grid.- Returns:
voxel_size (torch.Tensor) – A
(3,)-shaped tensor representing the size of each voxel.
- voxel_to_world(ijk: Tensor) Tensor[source]
Transform a set of voxel-space coordinates to their corresponding positions in world space using this
Grid’s origin and voxel size.See also
world_to_voxel()for the inverse transformation, andvoxel_to_world_matrixandworld_to_voxel_matrixfor the actual transformation matrices.- Parameters:
ijk (torch.Tensor) – A tensor of coordinates to convert. Shape:
(num_points, 3). Can be fractional for interpolation.- Returns:
world_points (torch.Tensor) – World coordinates. Shape:
(num_points, 3).
- property voxel_to_world_matrix: Tensor
The voxel-to-world transformation matrix for this
Grid, which transforms voxel space coordinates to world space coordinates.- Returns:
voxel_to_world_matrix (torch.Tensor) – A
(4, 4)-shaped tensor representing the voxel-to-world transformation matrix.
- voxels_along_rays(ray_origins: Tensor, ray_directions: Tensor, max_voxels: int, eps: float = 0.0, return_ijk: bool = False) tuple[JaggedTensor, JaggedTensor][source]
Enumerate the indices of voxels in this
Gridintersected by a set of rays in the order of their intersection.Note
If instead of index coordinates, you want voxel coordinates (i.e.
(i, j, k)), setreturn_ijk=True.- Parameters:
ray_origins (torch.Tensor) – Starting points of rays in world space. A
torch.Tensorwith shape(num_rays, 3).ray_directions (torch.Tensor) – Direction vectors of rays. A
torch.Tensorwith shape:(num_rays, 3). Note that the intersection distances are returned as a multiple of the ray direction length.max_voxels (int) – Maximum number of voxels to return per ray.
eps (float) – Small epsilon value which can help with numerical stability. Default is
0.0.return_ijk (bool) – Whether to return voxel coordinates instead of index coordinates. If
False, returns linear indices instead. Default isFalse.
- Returns:
voxels (JaggedTensor) – The voxel indices (or voxel coordinates) intersected by the rays. This is a
JaggedTensorwith shape:(num_rays, num_voxels_per_ray,), wherenum_voxels_per_rayvaries per ray up tomax_voxels. Each element contains either the linear index of the voxel or the(i, j, k)coordinates of the voxel ifreturn_ijk=True. Note: Ifreturn_ijk=True,voxelswill have shape:(num_rays, num_voxels_per_ray, 3).distances (JaggedTensor) – The entry and exit distances along each ray for each intersected voxel. This is a
JaggedTensorwith shape:(num_rays, num_voxels_per_ray, 2), wherenum_voxels_per_rayvaries per ray up tomax_voxels. Each element contains a pair of distances(t_entry, t_exit), representing where the ray enters and exits the voxel along its direction.
- world_to_voxel(points: Tensor) Tensor[source]
Convert world space coordinates to voxel space coordinates using the world-to-voxel transformation of this
Grid.Note
This method supports backpropagation through the transformation operation.
See also
voxel_to_world()for the inverse transformation, andvoxel_to_world_matrixandworld_to_voxel_matrixfor the actual transformation matrices.- Parameters:
points (torch.Tensor) – World-space positions to convert. A
torch.Tensorwith shape(num_points, 3).- Returns:
voxel_points (torch.Tensor) – Grid coordinates. A
torch.Tensorwith shape(num_points, 3). Can contain fractional values.
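A quick round-trip sketch of the two transformations (voxel size and query point are arbitrary), assuming the mapping (world - origin) / voxel_size implied by the origin and voxel-size definitions above:

import torch
import fvdb

grid = fvdb.Grid.from_ijk(
    torch.tensor([[0, 0, 0]], device="cuda"),
    voxel_size=0.5,
    origin=0.0,
    device="cuda",
)

world_points = torch.tensor([[1.25, 0.0, 0.0]], device="cuda")
ijk = grid.world_to_voxel(world_points)   # fractional voxel coordinates, here [[2.5, 0.0, 0.0]]
back = grid.voxel_to_world(ijk)           # recovers the original world-space points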
- class fvdb.GridBatch(*, impl: GridBatch)[source]
A batch of sparse voxel grids with support for efficient operations.
GridBatchrepresents a collection of sparse 3D voxel grids that can be processed together efficiently on GPU. Each grid in the batch can have different resolutions, origins, and voxel sizes. The class provides methods for common operations like sampling, convolution, pooling, dilation, union, etc. It also provides more advanced features such as marching cubes, TSDF fusion, and fast ray marching.A
GridBatchcan be thought of as a mini-batch ofGridinstances and, like aGrid, stores only the structure (or topology) of its sparse voxel grids rather than their data. Voxel data (e.g., features, colors, densities) for the collection of grids is stored separately as aJaggedTensorassociated with theGridBatch. This separation allows flexibility in the type and number of channels of data that a grid can index into, and it also allows multiple grids to share the same data storage if desired.When using a
GridBatch, there are three important coordinate systems to be aware of:World Space: The continuous 3D coordinate system in which each grid in the batch exists.
Voxel Space: The discrete voxel index system of each grid in the batch, where each voxel is identified by its integer indices (i, j, k).
Index Space: The linear indexing of active voxels in each grid’s internal storage.
At its core, a
GridBatchuses a very fast mapping from each grid’s voxel space into index space to perform operations on afvdb.JaggedTensorof data associated with the grids in the batch. This mapping allows for efficient access and manipulation of voxel data. For example:

voxel_coords = torch.tensor([[8, 7, 6], [1, 2, 3], [4, 5, 6]], device="cuda")  # Voxel space coordinates

# Voxel space coordinates for 3 grids in the batch
batch_voxel_coords = fvdb.JaggedTensor(
    [voxel_coords, voxel_coords + 44, voxel_coords - 44]
)

# Create a GridBatch containing 3 grids with the 3 sets of voxel coordinates such that the voxels
# have a world space size of 1x1x1, and where the [0, 0, 0] voxel in voxel space of each grid is at world space origin (0, 0, 0).
grid_batch = fvdb.GridBatch.from_ijk(batch_voxel_coords, voxel_sizes=1.0, origins=0.0, device="cuda")

# Create some data associated with the grids - here we have 9 voxels and 2 channels per voxel
voxel_data = torch.randn(grid_batch.total_voxels, 2, device="cuda")  # Index space data

# Map voxel space coordinates to index space
indices = grid_batch.ijk_to_index(batch_voxel_coords, cumulative=True).jdata  # Shape: (9,)

# Access the data for the specified voxel coordinates
selected_data = voxel_data[indices]  # Shape: (9, 2)
Note
A
GridBatchmay contain zero grids, in which case it has no voxel sizes nor origins that can be queried. It may also contain one or more empty grids, which means grids that have zero voxels. An empty grid still has a voxel size and origin, which can be queried.Note
The grids are stored in a sparse format using NanoVDB where only active (non-empty) voxels are allocated, making it extremely memory efficient for representing large volumes with sparse occupancy.
Note
The
GridBatchconstructor is for internal use only. To create aGridBatchwith actual content, use the classmethods:from_zero_grids(): for an empty grid batch where grid-count = 0.from_zero_voxels(): for a grid batch where each grid has zero voxels.from_dense(): for a grid batch where each grid is dense datafrom_dense_axis_aligned_bounds(): for a grid batch where each grid is dense data defined by axis-aligned boundsfrom_grid(): for a grid batch from a singleGridinstancefrom_ijk(): for a grid batch from explicit voxel coordinatesfrom_mesh(): for a grid batch from triangle meshesfrom_points(): for a grid batch from point cloudsfrom_nearest_voxels_to_points(): for a grid batch from nearest voxels to points
- max_grids_per_batch
Maximum number of grids that can be stored in a single
fvdb.GridBatch.- Type:
int
- property address: int
The address of the underlying C++ NanoVDB grid batch object.
- Returns:
address (int) – The memory address of the underlying C++ object.
- property all_have_zero_voxels: bool
Trueif all grids in thisGridBatchhave zero active voxels,Falseotherwise.Note
This returns
Trueif the batch has zero grids or if all grids have zero voxels.- Returns:
all_have_zero_voxels (bool) – Whether all grids have zero active voxels.
- property any_have_zero_voxels: bool
Trueif at least one grid in thisGridBatchhas zero active voxels,Falseotherwise.Note
This returns
Trueif the batch has zero grids or if any grid has zero voxels.- Returns:
any_have_zero_voxels (bool) – Whether any grid has zero active voxels.
- avg_pool(pool_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, data: JaggedTensor, stride: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, coarse_grid: GridBatch | None = None) tuple[JaggedTensor, GridBatch][source]
Apply average pooling to the given data associated with this
GridBatchreturned as data associated with the givencoarse_gridor a newly created coarseGridBatch.Performs average pooling on the voxel data, reducing the resolution by the specified
pool_factor. Each output voxel contains the average of the corresponding input voxels within the pooling window. The pooling operation respects the sparse structure of thisGridBatchand the givencoarse_grid.Note
If you pass
coarse_grid = None, the returned coarse grid batch will have its voxel sizes multiplied by thepool_factorand origins adjusted accordingly.Note
This method supports backpropagation through the pooling operation.
- Parameters:
pool_factor (NumericMaxRank1) – The factor by which to downsample the grids, broadcastable to shape
(3,), integer dtypedata (JaggedTensor) – The voxel data to pool. A
fvdb.JaggedTensorwith shape(batch_size, total_voxels, channels).stride (NumericMaxRank1) – The stride to use when pooling. If
0(default), stride equalspool_factor, broadcastable to shape(3,), integer dtypecoarse_grid (GridBatch, optional) – Pre-allocated coarse grid batch to use for output. If
None, a newGridBatchis created.
- Returns:
pooled_data (JaggedTensor) – A
fvdb.JaggedTensorcontaining the pooled voxel data with shape(batch_size, coarse_total_voxels, channels).coarse_grid (GridBatch) – A
GridBatchobject representing the coarse grid batch topology after pooling. Matches the providedcoarse_gridif given.
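A minimal sketch with two grids of known voxel counts and an arbitrary channel count:

import torch
import fvdb

# 64 voxels in a 4x4x4 block for grid 0, and the first 32 of those voxels for grid 1.
ijk0 = torch.tensor([[i, j, k] for i in range(4) for j in range(4) for k in range(4)], device="cuda")
ijk1 = ijk0[:32]
grid_batch = fvdb.GridBatch.from_ijk(fvdb.JaggedTensor([ijk0, ijk1]), voxel_sizes=0.1, origins=0.0)

features = fvdb.JaggedTensor([
    torch.randn(64, 8, device="cuda"),
    torch.randn(32, 8, device="cuda"),
])

pooled, coarse_batch = grid_batch.avg_pool(2, features)
# coarse_batch has voxel sizes of 0.2; each coarse voxel averages its occupied 2x2x2 fine voxels.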
- bbox_at(bi: int) Tensor[source]
Get the bounding box of the bi-th grid in the batch.
- Parameters:
bi (int) – The batch index of the grid.
- Returns:
bbox (torch.Tensor) – A tensor of shape
(2, 3)wherebbox = [[bmin_i, bmin_j, bmin_k], [bmax_i, bmax_j, bmax_k]]is the half-open bounding box such thatbmin <= ijk < bmaxfor all active voxelsijkin thebi-th grid.
- property bboxes: Tensor
The voxel-space bounding boxes of each grid in this
GridBatch.Note
The bounding boxes are inclusive of the minimum voxel and the maximum voxel.
e.g. if a grid has a single voxel at index
(0, 0, 0), its bounding box will be[[0, 0, 0], [0, 0, 0]].e.g. if a grid has voxels at indices
(0, 0, 0)and(1, 1, 1), its bounding box will be[[0, 0, 0], [1, 1, 1]].- Returns:
bboxes (torch.Tensor) – A
(grid_count, 2, 3)-shaped tensor where each entry represents the minimum and maximum voxel indices of the bounding box for each grid. If a grid has zero voxels, its bounding box is a zero tensor.
- clip(features: JaggedTensor, ijk_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]], ijk_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]]) tuple[JaggedTensor, GridBatch][source]
Creates a new
fvdb.GridBatchcontaining only the voxels that fall within the specified bounding box range[ijk_min, ijk_max]for each grid in the batch, and returns the corresponding clipped features.Note
This method supports backpropagation through the clipping operation.
- Parameters:
features (JaggedTensor) – The voxel features to clip. A
fvdb.JaggedTensorwith shape(batch_size, total_voxels, channels).ijk_min (NumericMaxRank2) – Minimum bounds in voxel space for each grid, broadcastable to shape
(batch_size, 3), integer dtypeijk_max (NumericMaxRank2) – Maximum bounds in voxel space for each grid, broadcastable to shape
(batch_size, 3), integer dtype
- Returns:
clipped_features (JaggedTensor) – A
fvdb.JaggedTensorcontaining the clipped voxel features with shape(batch_size, clipped_total_voxels, channels).clipped_grid (GridBatch) – A new
fvdb.GridBatchcontaining only voxels within the specified bounds for each grid.
- clipped_grid(ijk_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]], ijk_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]]) GridBatch[source]
Return a new
GridBatchrepresenting the clipped version of this batch of grids. Each voxel[i, j, k]in each grid of the input batch is included in the output if it lies withinijk_minandijk_maxfor that grid.- Parameters:
ijk_min (NumericMaxRank2) – Voxel space minimum bound of the clip region for each grid, broadcastable to shape
(batch_size, 3), integer dtypeijk_max (NumericMaxRank2) – Voxel space maximum bound of the clip region for each grid, broadcastable to shape
(batch_size, 3), integer dtype
- Returns:
clipped_grid (GridBatch) – A
GridBatchrepresenting the clipped version of this grid batch.
- coarsened_grid(coarsening_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size) GridBatch[source]
Return a
GridBatchrepresenting the coarsened version of this batch of grids. Each voxel[i, j, k]in the input that satisfiesi % coarsening_factor[0] == 0,j % coarsening_factor[1] == 0, andk % coarsening_factor[2] == 0is included in the output.- Parameters:
coarsening_factor (NumericMaxRank1) – The factor by which to coarsen each grid, broadcastable to shape
(3,), integer dtype- Returns:
coarsened_grid (GridBatch) – A
GridBatchrepresenting the coarsened version of this grid batch.
- contiguous() GridBatch[source]
Return a contiguous copy of the grid batch.
Ensures that the underlying data is stored contiguously in memory, which can improve performance for subsequent operations.
- Returns:
grid_batch (GridBatch) – A new GridBatch with contiguous memory layout.
- conv_grid(kernel_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, stride: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1) GridBatch[source]
Return a
GridBatchrepresenting the active voxels at the output of a convolution applied to this batch with a given kernel.- Parameters:
kernel_size (NumericMaxRank1) – Size of the kernel to convolve with, broadcastable to shape
(3,), integer dtype.stride (NumericMaxRank1) – Stride to use when convolving, broadcastable to shape
(3,), integer dtype.
- Returns:
conv_grid (GridBatch) – A GridBatch representing the convolution of this grid batch.
- coords_in_grid(ijk: JaggedTensor) JaggedTensor[source]
Check which voxel-space coordinates lie on active voxels for each grid.
- Parameters:
ijk (JaggedTensor) – Per-grid voxel coordinates to test. A
fvdb.JaggedTensorwith shape(batch_size, num_queries_for_grid_b, 3)with integer dtype.- Returns:
mask (JaggedTensor) – Boolean mask per-grid indicating which coordinates map to active voxels. A
fvdb.JaggedTensorwith shape(batch_size, num_queries_for_grid_b).
- cpu() GridBatch[source]
Move the grid batch to CPU.
- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchon CPU device.
- cubes_in_grid(cube_centers: JaggedTensor, cube_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, cube_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0) JaggedTensor[source]
Check if axis-aligned cubes are fully contained within the grid.
Tests whether cubes defined by their centers and bounds are completely inside the active voxels of the grid.
- Parameters:
cube_centers (JaggedTensor) – Centers of the cubes in world coordinates. A
fvdb.JaggedTensorwith shape(batch_size, num_cubes_for_grid_b, 3).cube_min (NumericMaxRank1) – Minimum offsets from center defining cube bounds, broadcastable to shape
(3,), floating dtypecube_max (NumericMaxRank1) – Maximum offsets from center defining cube bounds, broadcastable to shape
(3,), floating dtype
- Returns:
mask (JaggedTensor) – Boolean mask indicating which cubes are fully contained in the grid. A
fvdb.JaggedTensorwith shape(batch_size, num_cubes_for_grid_b).
- cubes_intersect_grid(cube_centers: JaggedTensor, cube_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, cube_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0) JaggedTensor[source]
Check if axis-aligned cubes intersect with the grid.
Tests whether cubes defined by their centers and bounds have any intersection with the active voxels of the grid.
- Parameters:
cube_centers (JaggedTensor) – Centers of the cubes in world coordinates. A
fvdb.JaggedTensorwith shape(batch_size, num_cubes_for_grid_b, 3).cube_min (NumericMaxRank1) – Minimum offsets from center defining cube bounds, broadcastable to shape
(3,), floating dtypecube_max (NumericMaxRank1) – Maximum offsets from center defining cube bounds, broadcastable to shape
(3,), floating dtype
- Returns:
mask (JaggedTensor) – Boolean mask indicating which cubes intersect the grid. A
fvdb.JaggedTensorwith shape(batch_size, num_cubes_for_grid_b).
- cuda() GridBatch[source]
Move the grid batch to CUDA device.
- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchon CUDA device.
- property cum_voxels: Tensor
The cumulative number of voxels up to and including each grid in this
GridBatch.Note
This is useful for indexing into flattened data structures where all voxels from all grids are concatenated together.
- Returns:
cum_voxels (torch.Tensor) – A
(grid_count,)-shaped tensor where each element represents the cumulative sum of voxels up to and including that grid.
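For instance, cum_voxels can be used to slice a flat per-voxel tensor into per-grid chunks (a minimal sketch with two grids of three voxels each):

import torch
import fvdb

voxel_coords = torch.tensor([[0, 0, 0], [1, 0, 0], [0, 1, 0]], device="cuda")
grid_batch = fvdb.GridBatch.from_ijk(
    fvdb.JaggedTensor([voxel_coords, voxel_coords + 10]), voxel_sizes=1.0, origins=0.0
)

flat_data = torch.randn(grid_batch.total_voxels, 2, device="cuda")
cum = grid_batch.cum_voxels  # tensor([3, 6]) here: 3 voxels in grid 0, 6 cumulative after grid 1

# Voxels of grid 1 occupy the flat index range [cum[0], cum[1]).
grid1_data = flat_data[int(cum[0]):int(cum[1])]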
- cum_voxels_at(bi: int) int[source]
Get the cumulative number of voxels up to and including a specific grid.
- Parameters:
bi (int) – The batch index of the grid.
- Returns:
cum_voxels (int) – The cumulative number of voxels up to and including grid
bi.
- property device: device
The
torch.devicewhere thisGridBatchis stored.- Returns:
device (torch.device) – The device of the batch.
- dilated_grid(dilation: int) GridBatch[source]
Return the grid dilated by a given number of voxels.
- Parameters:
dilation (int) – The dilation radius in voxels.
- Returns:
dilated_grid (GridBatch) – A new
fvdb.GridBatchwith dilated active regions.
- dual_bbox_at(bi: int) Tensor[source]
Get the dual bounding box of a specific grid in the batch.
The dual grid has voxel centers at the corners of the primal grid voxels.
See also
dual_grid()to compute the actual dual grid.- Parameters:
bi (int) – The batch index of the grid.
- Returns:
dual_bbox (torch.Tensor) – A tensor of shape
(2, 3)containing the minimum and maximum coordinates of the dual bounding box in voxel space.
- property dual_bboxes: Tensor
The voxel-space bounding boxes of the dual of each grid in this
GridBatch. i.e. the bounding boxes of the grids whose voxel centers correspond to voxel corners in the original grids.See also
bboxesfor the bounding boxes of the grids in thisGridBatch, anddual_grid()for computing the dual grids.Note
The bounding boxes are inclusive of the minimum voxel and the maximum voxel.
e.g. if a grid has a single voxel at index
(0, 0, 0), the dual grid will contain voxels at indices(0, 0, 0), (0, 0, 1), (0, 1, 0), ..., (1, 1, 1), and the bounding box will be[[0, 0, 0], [1, 1, 1]].- Returns:
dual_bboxes (torch.Tensor) – A
(grid_count, 2, 3)-shaped tensor where each entry represents the minimum and maximum voxel indices of the dual bounding box for each grid. If a grid has zero voxels, its dual bounding box is a zero tensor.
- dual_grid(exclude_border: bool = False) GridBatch[source]
Return the dual grid where voxel centers correspond to corners of the primal grid.
The dual grid is useful for staggered grid discretizations and finite difference operations.
- Parameters:
exclude_border (bool) – If
True, excludes border voxels that would extend beyond the primal grid bounds. Default isFalse.- Returns:
dual_grid (GridBatch) – A new
fvdb.GridBatchrepresenting the dual grid.
- classmethod from_dense(num_grids: int, dense_dims: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, ijk_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, voxel_sizes: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 1, origins: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 0, mask: Tensor | None = None, device: str | device | None = None) GridBatch[source]
Create a batch of dense grids.
A dense grid has a voxel for every coordinate in an axis-aligned box.
For each grid in the batch, the dense grid is defined by:
dense_dims: the size of the dense grids (shape
[3,] = [W, H, D])ijk_min: the minimum voxel index for the grid (shape
[3,] = [i_min, j_min, k_min])voxel_sizes: the world-space size of each voxel (shape
[3,] = [sx, sy, sz])origins: the world-space coordinate of the center of the
[0,0,0]voxel of the grid (shape[3,] = [x0, y0, z0])mask: indicates which voxels are “active” in the resulting grids.
Note
voxel_sizesandoriginsmay be provided per-grid or broadcast across the batch.ijk_minanddense_dimsapply to all grids in the batch.maskapplies to all grids.- Parameters:
num_grids (int) – Number of grids to create.
dense_dims (NumericMaxRank1) – Dimensions of the dense grid for all grids in the batch, broadcastable to shape
(3,), integer dtype.ijk_min (NumericMaxRank1) – Minimum voxel index for the grids, for all grids in the batch broadcastable to shape
(3,), integer dtype.voxel_sizes (NumericMaxRank2) – World-space size of each voxel, per-grid; broadcastable to shape
(num_grids, 3), floating dtype.origins (NumericMaxRank2) – World-space coordinate of the center of the
[0,0,0]voxel, per-grid; broadcastable to shape(num_grids, 3), floating dtype.mask (torch.Tensor | None) – Optional boolean mask with shape
(W, H, D)selecting active voxels.device (DeviceIdentifier | None) – Device to create the grid batch on. Defaults to
None, which inherits frommaskif provided, otherwise uses"cpu".
- Returns:
grid_batch (GridBatch) – A new
GridBatchobject.
Examples
grid_batch = fvdb.GridBatch.from_dense(
    num_grids=5,
    dense_dims=[10, 10, 10],
    voxel_sizes=[1.0, 1.0, 1.0],
    origins=[0.0, 0.0, 0.0],
    mask=None,
    device="cuda",
)
grid_batch.grid_count  # 5
grid_batch.voxel_sizes == tensor([[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]])
grid_batch.origins == tensor([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
- classmethod from_dense_axis_aligned_bounds(num_grids: int, dense_dims: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, bounds_min: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, bounds_max: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 1, voxel_center: bool = False, device: str | device = 'cpu') GridBatch[source]
Create a
fvdb.GridBatchrepresenting a batch of dense grids defined by axis-aligned bounds.The resulting grids have voxels spanning
dense_dimswith voxel sizes and origins computed to fit within the world-space box[bounds_min, bounds_max].If
voxel_centerisTrue, the bounds correspond to the centers of the corner voxels. Ifvoxel_centerisFalse, the bounds correspond to the outer edges of the corner voxels.- Parameters:
num_grids (int) – Number of grids to create.
dense_dims (NumericMaxRank1) – Dimensions of the dense grids, broadcastable to shape
(3,), integer dtype.bounds_min (NumericMaxRank1) – Minimum world-space coordinate for all grids, broadcastable to shape
(3,), floating dtype.bounds_max (NumericMaxRank1) – Maximum world-space coordinate for all grids, broadcastable to shape
(3,), floating dtype.voxel_center (bool) – Whether the bounds correspond to voxel centers (
True) or edges (False). Defaults toFalse.device (DeviceIdentifier) – Device to create the grids on. Defaults to
"cpu".
- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchobject.
Examples
grid_batch = fvdb.GridBatch.from_dense_axis_aligned_bounds(
    num_grids=5,
    dense_dims=[10, 10, 10],
    bounds_min=[-1.0, -1.0, -1.0],
    bounds_max=[1.0, 1.0, 1.0],
    voxel_center=False,
    device="cuda",
)
grid_batch.grid_count  # 5
grid_batch.voxel_sizes  # tensor([[0.2, 0.2, 0.2], [0.2, 0.2, 0.2], [0.2, 0.2, 0.2], [0.2, 0.2, 0.2], [0.2, 0.2, 0.2]])
grid_batch.origins  # tensor([[-.9, -.9, -.9], [-.9, -.9, -.9], [-.9, -.9, -.9], [-.9, -.9, -.9], [-.9, -.9, -.9]])
- classmethod from_grid(grid: Grid) GridBatch[source]
Create a
fvdb.GridBatchof batch size 1 from a singlefvdb.Grid.- Parameters:
- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchobject.
Examples
grid = fvdb.Grid.from_ijk(
    ijk=torch.tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]]),
    voxel_size=[1.0, 1.0, 1.0],
    origin=[0.0, 0.0, 0.0],
    device="cuda",
)
grid_batch = fvdb.GridBatch.from_grid(grid)
grid_batch.grid_count  # 1
grid_batch.ijk.jdata == tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
- classmethod from_ijk(ijk: JaggedTensor, voxel_sizes: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 1, origins: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 0, device: str | device | None = None) GridBatch[source]
Create a batch of grids from voxel-space coordinates. If the input contains duplicate coordinates for a grid, only one voxel is created at that coordinate.
- Parameters:
ijk (JaggedTensor) – Per-grid voxel coordinates to populate. Shape:
(batch_size, num_voxels_for_grid_b, 3)with integer coordinates.voxel_sizes (NumericMaxRank2) – Size of each voxel, per-grid; broadcastable to shape
(batch_size, 3), floating dtypeorigins (NumericMaxRank2) – World-space coordinate of the center of the
[0,0,0]voxel, per-grid; broadcastable to shape(batch_size, 3), floating dtypedevice (DeviceIdentifier | None) – Device to create the grid batch on. Defaults to
None, which inherits fromijk.
- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchobject.
Examples
ijk = fvdb.JaggedTensor(torch.tensor([
    [0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0],
    [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]
]))
grid_batch = fvdb.GridBatch.from_ijk(ijk=ijk, voxel_sizes=[1.0, 1.0, 1.0], origins=[0.0, 0.0, 0.0])
grid_batch.grid_count  # 1
grid_batch.ijk.jdata == tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
- classmethod from_mesh(mesh_vertices: JaggedTensor, mesh_faces: JaggedTensor, voxel_sizes: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 1, origins: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 0, device: str | device | None = None) GridBatch[source]
Create a
fvdb.GridBatchby voxelizing the surface of a set of triangle meshes. i.e. voxels that intersect the surface of the meshes will be contained in the resultingfvdb.GridBatch.- Parameters:
mesh_vertices (JaggedTensor) – Per-grid mesh vertex positions. A
fvdb.JaggedTensorwith shape(batch_size, num_vertices_for_grid_b, 3).mesh_faces (JaggedTensor) – Per-grid mesh face indices. A
fvdb.JaggedTensorwith shape(batch_size, num_faces_for_grid_b, 3).voxel_sizes (NumericMaxRank2) – Size of each voxel, per-grid; broadcastable to shape
(batch_size, 3), floating dtypeorigins (NumericMaxRank2) – World-space coordinate of the center of the
[0,0,0]voxel, per-grid; broadcastable to shape(batch_size, 3), floating dtypedevice (DeviceIdentifier | None) – Device to create the grid batch on. Defaults to
None, which inherits frommesh_vertices.
- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchobject with voxels covering the surfaces of the input meshes.
Examples
mesh_vertices = fvdb.JaggedTensor(torch.tensor([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0]
]))
mesh_faces = fvdb.JaggedTensor(torch.tensor([
    [0, 1, 2], [1, 3, 2], [4, 5, 6], [5, 7, 6],
    [0, 1, 4], [1, 5, 4], [2, 3, 6], [3, 7, 6],
    [0, 2, 4], [2, 6, 4], [1, 3, 5], [3, 7, 5]
]))
grid_batch = fvdb.GridBatch.from_mesh(mesh_vertices, mesh_faces, voxel_sizes=[1.0, 1.0, 1.0], origins=[0.0, 0.0, 0.0])
grid_batch.grid_count  # 1
grid_batch.ijk.jdata == tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
- classmethod from_nanovdb(path: str, *, device: str | device = 'cpu', verbose: bool = False) tuple[GridBatch, JaggedTensor, list[str]][source]
- classmethod from_nanovdb(path: str, *, indices: list[int], device: str | device = 'cpu', verbose: bool = False) tuple[GridBatch, JaggedTensor, list[str]]
- classmethod from_nanovdb(path: str, *, index: int, device: str | device = 'cpu', verbose: bool = False) tuple[GridBatch, JaggedTensor, list[str]]
- classmethod from_nanovdb(path: str, *, names: list[str], device: str | device = 'cpu', verbose: bool = False) tuple[GridBatch, JaggedTensor, list[str]]
- classmethod from_nanovdb(path: str, *, name: str, device: str | device = 'cpu', verbose: bool = False) tuple[GridBatch, JaggedTensor, list[str]]
Load a grid batch from a .nvdb file.
- Parameters:
path – The path to the .nvdb file to load
indices – Optional list of indices to load from the file (mutually exclusive with other selectors)
index – Optional single index to load from the file (mutually exclusive with other selectors)
names – Optional list of names to load from the file (mutually exclusive with other selectors)
name – Optional single name to load from the file (mutually exclusive with other selectors)
device – Which device to load the grid batch on
verbose – If set to true, print information about the loaded grids
- Returns:
grid_batch (GridBatch) – A
fvdb.GridBatchcontaining the loaded grids.data (JaggedTensor) – A
fvdb.JaggedTensorcontaining the data of the grids.names (list[str]) – A list of strings containing the name of each grid.
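For example (the file path and grid name below are illustrative, not real files):

import fvdb

# Load every grid stored in the file.
grid_batch, data, names = fvdb.GridBatch.from_nanovdb("example.nvdb", device="cuda")

# Or load a single grid by name, assuming the file contains a grid called "density".
density_batch, density_data, _ = fvdb.GridBatch.from_nanovdb(
    "example.nvdb", name="density", device="cuda"
)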
- classmethod from_nearest_voxels_to_points(points: JaggedTensor, voxel_sizes: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 1, origins: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 0, device: str | device | None = None) GridBatch[source]
Create grids by adding the eight nearest voxels to every input point.
- Parameters:
points (JaggedTensor) – Per-grid point positions to populate the grid from. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3).voxel_sizes (NumericMaxRank2) – Size of each voxel, per-grid; broadcastable to shape
(batch_size, 3), floating dtypeorigins (NumericMaxRank2) – World-space coordinate of the center of the
[0,0,0]voxel, per-grid; broadcastable to shape(batch_size, 3), floating dtypedevice (DeviceIdentifier | None) – Device to create the grid batch on. Defaults to
None, which inherits frompoints.
- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchobject.
Examples
points = fvdb.JaggedTensor(torch.tensor([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0]
]))
grid_batch = fvdb.GridBatch.from_nearest_voxels_to_points(points, voxel_sizes=[1.0, 1.0, 1.0], origins=[0.0, 0.0, 0.0])
grid_batch.grid_count  # 1
grid_batch.ijk.jdata == tensor([[0, 0, 0], [0, 0, 1], [0, 0, 2], [0, 1, 0], [0, 1, 1], [0, 1, 2],
                                [0, 2, 0], [0, 2, 1], [0, 2, 2], [1, 0, 0], [1, 0, 1], [1, 0, 2],
                                [1, 1, 0], [1, 1, 1], [1, 1, 2], [1, 2, 0], [1, 2, 1], [1, 2, 2],
                                [2, 0, 0], [2, 0, 1], [2, 0, 2], [2, 1, 0], [2, 1, 1], [2, 1, 2],
                                [2, 2, 0], [2, 2, 1], [2, 2, 2]], dtype=torch.int32)
- classmethod from_points(points: JaggedTensor, voxel_sizes: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 1, origins: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 0, device: str | device | None = None) GridBatch[source]
Create a batch of grids from a batch of point clouds.
- Parameters:
points (JaggedTensor) – Per-grid point positions to populate the grid from. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3).voxel_sizes (NumericMaxRank2) – Size of each voxel, per-grid; broadcastable to shape
(batch_size, 3), floating dtypeorigins (NumericMaxRank2) – World-space coordinate of the center of the
[0,0,0]voxel, per-grid; broadcastable to shape(batch_size, 3), floating dtypedevice (DeviceIdentifier | None) – Device to create the grid batch on. Defaults to
None, which inherits frompoints.
- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchobject.
Examples
points = fvdb.JaggedTensor(torch.tensor([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0],
    [0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0]
]))
grid_batch = fvdb.GridBatch.from_points(points, voxel_sizes=[1.0, 1.0, 1.0], origins=[0.0, 0.0, 0.0])
grid_batch.grid_count  # 1
grid_batch.ijk.jdata == tensor([[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
- classmethod from_zero_grids(device: str | device = 'cpu') GridBatch[source]
Create a
fvdb.GridBatchwith zero grids. It retains its device identifier, but has no other information like voxel size or origin or bounding box. It will reportgrid_count == 0.- Parameters:
device (DeviceIdentifier) – The device to create the
fvdb.GridBatchon. Can be a string (e.g.,"cuda","cpu") or atorch.deviceobject. Defaults to"cpu".- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchobject.
Examples
grid_batch = fvdb.GridBatch.from_zero_grids("cuda")
grid_batch.grid_count  # 0
- classmethod from_zero_voxels(device: str | device = 'cpu', voxel_sizes: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 1, origins: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] = 0) GridBatch[source]
Create a
fvdb.GridBatchwith one or more zero-voxel grids on a specific device.A zero-voxel grid batch does not mean there are zero grids. It means that the grids have zero voxels. This constructor will create as many zero-voxel grids as the batch size of
voxel_sizesandorigins, defaulting to 1 grid, though for that case, you should use the single-gridfvdb.Gridconstructor instead.- Parameters:
device (DeviceIdentifier) – The device to create the
fvdb.GridBatchon. Can be a string (e.g.,"cuda","cpu") or atorch.deviceobject. Defaults to"cpu".voxel_sizes (NumericMaxRank2) – The default size per voxel, broadcastable to shape
(num_grids, 3), floating dtypeorigins (NumericMaxRank2) – The default origin of the grid, broadcastable to shape
(num_grids, 3), floating dtype
- Returns:
grid_batch (GridBatch) – A new
fvdb.GridBatchobject with zero-voxel grids.
Examples
grid_batch = GridBatch.from_zero_voxels("cuda", 1, 0)                  # string
grid_batch = GridBatch.from_zero_voxels(torch.device("cuda:0"), 1, 0)  # device directly
grid_batch = GridBatch.from_zero_voxels(voxel_sizes=1, origins=0)      # defaults to CPU
- property grid_count: int
The number of grids in this
GridBatch.- Returns:
count (int) – Number of grids.
- has_same_address_and_grid_count(other: Any) bool[source]
Check if two grid batches have the same address and grid count.
- property has_zero_grids: bool
Trueif thisGridBatchcontains zero grids,Falseotherwise.- Returns:
has_zero_grids (bool) – Whether the batch has zero grids.
- has_zero_voxels_at(bi: int) bool[source]
Check whether a specific grid in the batch is empty, i.e. has zero voxels.
- Parameters:
bi (int) – The batch index of the grid.
- Returns:
is_empty (bool) – True if the grid is empty, False otherwise.
- hilbert(offset: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None) JaggedTensor[source]
Return Hilbert curve codes for active voxels in this grid batch.
Hilbert curves provide better spatial locality than Morton codes by ensuring that nearby points in 3D space are also nearby in the 1D curve ordering.
- Parameters:
offset – Optional offset to apply to voxel coordinates before encoding. If None, uses the negative minimum coordinate across all voxels.
- Returns:
JaggedTensor – A JaggedTensor of shape [num_grids, -1, 1] containing the Hilbert codes for each active voxel in the batch.
- hilbert_zyx(offset: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None) JaggedTensor[source]
Return transposed Hilbert curve codes for active voxels in this grid batch.
Transposed Hilbert curves use zyx ordering instead of xyz. This variant can provide better spatial locality for certain access patterns.
- Parameters:
offset – Optional offset to apply to voxel coordinates before encoding. If None, uses the negative minimum coordinate across all voxels.
- Returns:
JaggedTensor – A JaggedTensor of shape [num_grids, -1, 1] containing the transposed Hilbert codes for each active voxel in the batch.
- property ijk: JaggedTensor
The voxel coordinates of every active voxel in each grid of this
GridBatch, in index order.- Returns:
ijk (JaggedTensor) – A
fvdb.JaggedTensorwith shape(batch_size, total_voxels, 3)containing the voxel coordinates of each active voxel in index order for each grid.
- ijk_to_index(ijk: JaggedTensor, cumulative: bool = False) JaggedTensor[source]
Convert voxel-space coordinates to linear index-space for each grid.
Maps 3D voxel space coordinates to their corresponding linear indices. Returns
-1for coordinates that don’t correspond to active voxels.- Parameters:
ijk (JaggedTensor) – Per-grid voxel coordinates to convert. A
fvdb.JaggedTensorwith shape(batch_size, num_queries_for_grid_b, 3)with integer dtype.cumulative (bool) – If
True, return indices cumulative across the whole batch; otherwise per-grid.
- Returns:
indices (JaggedTensor) – Linear indices for each coordinate, or
-1if not active. Afvdb.JaggedTensorwith shape(batch_size, num_queries_for_grid_b).
- ijk_to_inv_index(ijk: JaggedTensor, cumulative: bool = False) JaggedTensor[source]
Get inverse permutation of
ijk_to_index(). i.e. for each voxel in each grid, return the index in the inputijktensor.- Parameters:
ijk (JaggedTensor) – Voxel coordinates to convert. A
fvdb.JaggedTensorwith shape(batch_size, num_queries_for_grid_b, 3)with integer coordinates.cumulative (bool) – If
True, returns cumulative indices across the entire batch. IfFalse, returns per-grid indices. Default isFalse.
- Returns:
inv_map (JaggedTensor) – Inverse permutation for
ijk_to_index(). Afvdb.JaggedTensorwith shape(batch_size, num_queries_for_grid_b).
- index_int(bi: int | integer) GridBatch[source]
Get a subset of grids from the batch using integer indexing.
- Parameters:
bi (int | np.integer) – Grid index.
- Returns:
grid_batch (GridBatch) – A new GridBatch containing the selected grid.
- index_list(indices: list[bool] | list[int]) GridBatch[source]
Get a subset of grids from the batch using list indexing.
- Parameters:
indices (list[bool] | list[int]) – List of indices.
- Returns:
grid_batch (GridBatch) – A new GridBatch containing the selected grids.
- index_slice(s: slice) GridBatch[source]
Get a subset of grids from the batch using slicing.
- Parameters:
s (slice) – Slicing object.
- Returns:
grid_batch (GridBatch) – A new GridBatch containing the selected grids.
- index_tensor(indices: Tensor) GridBatch[source]
Get a subset of grids from the batch using tensor indexing.
- Parameters:
indices (torch.Tensor) – Tensor of indices.
- Returns:
grid_batch (GridBatch) – A new GridBatch containing the selected grids.
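The four indexing methods can be combined freely; a minimal sketch over a four-grid batch with arbitrary voxel coordinates:

import torch
import fvdb

ijk = fvdb.JaggedTensor([torch.randint(0, 4, (10, 3), dtype=torch.int32) for _ in range(4)])
grid_batch = fvdb.GridBatch.from_ijk(ijk, voxel_sizes=1.0, origins=0.0)

first = grid_batch.index_int(0)                              # batch containing grid 0
last_two = grid_batch.index_slice(slice(2, 4))               # batch containing grids 2 and 3
picked = grid_batch.index_tensor(torch.tensor([0, 3]))       # batch containing grids 0 and 3
masked = grid_batch.index_list([True, False, True, False])   # batch containing grids 0 and 2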
- inject_from(src_grid: GridBatch, src: JaggedTensor, dst: JaggedTensor | None = None, default_value: float | int | bool = 0) JaggedTensor[source]
Inject data associated with the source grid batch to a
fvdb.JaggedTensorassociated with this grid batch.Note
The copy occurs in voxel space; the voxel-to-world transform is not applied.
Note
If you pass in destination data,
dst, thendstwill be modified in-place. IfdstisNone, a newfvdb.JaggedTensorwill be created with the shape(self.grid_count, self.total_voxels, *src.eshape)and filled withdefault_valuefor any voxels that do not have corresponding data insrc.Note
This method supports backpropagation through the injection operation.
- Parameters:
src_grid (GridBatch) – The source
fvdb.GridBatchto inject data from.src (JaggedTensor) – Source data associated with
src_grid. This must be afvdb.JaggedTensorwith shape(batch_size, src_grid.total_voxels, *).dst (JaggedTensor | None) – Optional destination data to be modified in-place. This must be a
fvdb.JaggedTensorwith shape(batch_size, self.total_voxels, *)orNone.default_value (float | int | bool) – Value to fill in for voxels that do not have corresponding data in
src. This is used only ifdstisNone. Default is0.
- Returns:
dst (JaggedTensor) – The data copied from
srcdata after injection.
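A minimal sketch with two single-grid batches that share some voxels (channel count arbitrary):

import torch
import fvdb

src_coords = torch.tensor([[0, 0, 0], [1, 0, 0], [2, 0, 0]], device="cuda")
dst_coords = torch.tensor([[0, 0, 0], [1, 0, 0], [5, 5, 5]], device="cuda")
src_batch = fvdb.GridBatch.from_ijk(fvdb.JaggedTensor([src_coords]), voxel_sizes=1.0, origins=0.0)
dst_batch = fvdb.GridBatch.from_ijk(fvdb.JaggedTensor([dst_coords]), voxel_sizes=1.0, origins=0.0)

src_data = fvdb.JaggedTensor([torch.randn(src_batch.total_voxels, 4, device="cuda")])

dst_data = dst_batch.inject_from(src_batch, src_data, dst=None, default_value=0.0)
# Voxels (0,0,0) and (1,0,0) receive the matching rows of src_data; voxel (5,5,5) is filled with 0.0.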
- inject_from_dense_cmajor(dense_data: Tensor, dense_origins: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0) JaggedTensor[source]
Inject values from a dense
torch.Tensorinto afvdb.JaggedTensorassociated with thisGridBatch.This is the “C Major” (channels major) version, which assumes the
dense_datais in CXYZ order. i.e. the dense tensor has shape[batch_size, channels*, dense_size_x, dense_size_y, dense_size_z].Note
This method supports backpropagation through the read operation.
See also
inject_from_dense_cminor()for the “C Minor” (channels minor) version, which assumes thedense_datais in XYZC order.See also
inject_to_dense_cmajor()for writing data to a dense tensor in “C Major” order.- Parameters:
dense_data (torch.Tensor) – Dense
torch.Tensorto read from. Shape:(batch_size, channels*, dense_size_x, dense_size_y, dense_size_z).dense_origins (NumericMaxRank1, optional) – Origin of the dense tensor in voxel space, broadcastable to shape
(3,), integer dtype. Default is(0, 0, 0).
- Returns:
sparse_data (JaggedTensor) – Values from the dense tensor at voxel locations active in this
GridBatch. Afvdb.JaggedTensorwith shape(batch_size, total_voxels, channels*).
- inject_from_dense_cminor(dense_data: Tensor, dense_origins: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0) JaggedTensor[source]
Inject values from a dense
torch.Tensorinto afvdb.JaggedTensorassociated with thisGridBatch.This is the “C Minor” (channels minor) version, which assumes the
dense_datais in XYZC order. i.e. the dense tensor has shape[batch_size, dense_size_x, dense_size_y, dense_size_z, channels*].Note
This method supports backpropagation through the read operation.
See also
inject_from_dense_cmajor()for the “C Major” (channels major) version, which assumes thedense_datais in CXYZ order.See also
inject_to_dense_cminor()for writing data to a dense tensor in C Minor order.- Parameters:
dense_data (torch.Tensor) – Dense
torch.Tensorto read from. Shape:(batch_size, dense_size_x, dense_size_y, dense_size_z, channels*).dense_origins (NumericMaxRank1, optional) – Origin of the dense tensor in voxel space, broadcastable to shape
(3,), integer dtype. Default is(0, 0, 0).
- Returns:
sparse_data (JaggedTensor) – Values from the dense tensor at voxel locations active in this
GridBatch. Afvdb.JaggedTensorwith shape(batch_size, total_voxels, channels*).
- inject_from_ijk(src_ijk: JaggedTensor, src: JaggedTensor, dst: JaggedTensor | None = None, default_value: float | int | bool = 0)[source]
Inject data from source voxel coordinates to a sidecar for this grid.
Note
This method supports backpropagation through the injection operation.
- Parameters:
src_ijk (JaggedTensor) – Voxel coordinates in voxel space from which to copy data. Shape:
(B, num_src_voxels, 3).src (JaggedTensor) – Source data to inject. Must match the shape of the destination. Shape:
(B, num_src_voxels, *).dst (JaggedTensor | None) – Optional destination data to be modified in-place. If None, a new JaggedTensor will be created with the same element shape as src and filled with default_value for any voxels that do not have corresponding data in src.
default_value (float | int | bool) – Value to fill in for voxels that do not have corresponding data in src. Default is 0.
- inject_to(dst_grid: GridBatch, src: JaggedTensor, dst: JaggedTensor | None = None, default_value: float | int | bool = 0) JaggedTensor[source]
Inject data from this grid to a destination grid. This method copies sidecar data for voxels in this grid to a sidecar corresponding to voxels in the destination grid.
The copy occurs in voxel space; the voxel-to-world transform is not applied.
If you pass in the destination data (dst), it will be modified in-place. If dst is None, a new JaggedTensor will be created with the same element shape as src and filled with default_value for any voxels that do not have corresponding data in src.
Note
This method supports backpropagation through the injection operation.
- Parameters:
dst_grid (GridBatch) – The destination grid to inject data into.
src (JaggedTensor) – Source data from this grid. Shape:
(batch_size, -1, *).dst (JaggedTensor | None) – Optional destination data to be modified in-place. Shape:
(batch_size, -1, *)orNone.default_value (float | int | bool) – Value to fill in for voxels that do not have corresponding data in src. This is used only if dst is None. Default is 0.
- Returns:
dst (JaggedTensor) – The destination sidecar data after injection.
- inject_to_dense_cmajor(sparse_data: JaggedTensor, min_coord: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] | None = None, grid_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None) Tensor[source]
Inject values from a
fvdb.JaggedTensorassociated with thisGridBatchinto a densetorch.Tensor.This is the “C Major” (channels major) version, which assumes the
dense_datais in CXYZ order. i.e. the dense tensor has shape[batch_size, channels*, dense_size_x, dense_size_y, dense_size_z].This method creates the dense tensor to return, and fills it with values from the sparse grids within the range defined by
min_coordandgrid_size. Voxels not present in the sparse grids are filled with zeros.Note
This method supports backpropagation through the write operation.
See also
inject_from_dense_cmajor()for reading from a dense tensor in “C Major” order, which assumes the dense tensor has shape[batch_size, channels*, dense_size_x, dense_size_y, dense_size_z].See also
inject_to_dense_cminor()for writing to a dense tensor in “C Minor” order.- Parameters:
sparse_data (JaggedTensor) – A
fvdb.JaggedTensorof data associated with thisGridBatchwith shape(batch_size, total_voxels, channels*).min_coord (NumericMaxRank2|None) – Minimum voxel coordinate to read from each grid in the batch into the output dense tensor, broadcastable to shape
(batch_size, 3), integer dtype, orNone. If set toNone, this will be the minimum voxel coordinate of each grid’s bounding box.grid_size (NumericMaxRank1|None) – Size of the output dense tensor, broadcastable to shape
(3,), integer dtype, orNone. IfNone, computed to fit all active voxels starting frommin_coord.
- Returns:
dense_data (torch.Tensor) – Dense
torch.Tensorcontaining the sparse data with shape(batch_size, channels*, dense_size_x, dense_size_y, dense_size_z).
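A minimal round-trip sketch between the sparse layout and a small dense tensor (coordinates and channel count arbitrary):

import torch
import fvdb

coords = torch.tensor([[0, 0, 0], [1, 1, 1], [2, 2, 2]], device="cuda")
grid_batch = fvdb.GridBatch.from_ijk(fvdb.JaggedTensor([coords]), voxel_sizes=1.0, origins=0.0)
sparse = fvdb.JaggedTensor([torch.randn(grid_batch.total_voxels, 4, device="cuda")])

dense = grid_batch.inject_to_dense_cmajor(sparse, min_coord=[0, 0, 0], grid_size=[3, 3, 3])
# dense has shape (1, 4, 3, 3, 3); locations without an active voxel hold zeros.

roundtrip = grid_batch.inject_from_dense_cmajor(dense, dense_origins=[0, 0, 0])
# roundtrip recovers the original sparse values at the active voxels.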
- inject_to_dense_cminor(sparse_data: JaggedTensor, min_coord: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] | None = None, grid_size: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None) Tensor[source]
Inject values from a
fvdb.JaggedTensorassociated with thisGridBatchinto a densetorch.Tensor.This is the “C Minor” (channels minor) version, which assumes the
dense_datais in XYZC order. i.e. the dense tensor has shape[batch_size, dense_size_x, dense_size_y, dense_size_z, channels*].This method creates the dense tensor to return, and fills it with values from the sparse grids within the range defined by
min_coordandgrid_size. Voxels not present in the sparse grids are filled with zeros.Note
This method supports backpropagation through the write operation.
See also
inject_from_dense_cminor()for reading from a dense tensor in “C Minor” order, which assumes the dense tensor has shape[batch_size, dense_size_x, dense_size_y, dense_size_z, channels*].See also
inject_to_dense_cmajor()for writing to a dense tensor in “C Major” order.- Parameters:
sparse_data (JaggedTensor) – A
fvdb.JaggedTensorof data associated with thisGridBatchwith shape(batch_size, total_voxels, channels*).min_coord (NumericMaxRank2|None) – Minimum voxel coordinate to read from each grid in the batch into the output dense tensor, broadcastable to shape
(batch_size, 3), integer dtype, orNone. If set toNone, this will be the minimum voxel coordinate of each grid’s bounding box.grid_size (NumericMaxRank1|None) – Size of the output dense tensor, broadcastable to shape
(3,), integer dtype, orNone. IfNone, computed to fit all active voxels starting frommin_coord.
- Returns:
dense_data (torch.Tensor) – Dense
torch.Tensorcontaining the sparse data with shape(batch_size, dense_size_x, dense_size_y, dense_size_z, channels*).
- integrate_tsdf(truncation_distance: float, projection_matrices: Tensor, cam_to_world_matrices: Tensor, tsdf: JaggedTensor, weights: JaggedTensor, depth_images: Tensor, weight_images: Tensor | None = None) tuple[GridBatch, JaggedTensor, JaggedTensor][source]
Integrate depth images into a Truncated Signed Distance Function (TSDF) volume.
Updates the TSDF values and weights in the voxel grid by integrating new depth observations from multiple camera viewpoints. This is commonly used for 3D reconstruction from RGB-D sensors.
- Parameters:
truncation_distance (float) – Maximum distance to truncate TSDF values (in world units).
projection_matrices (torch.Tensor) – Camera projection matrices. Shape:
(batch_size, 3, 3).cam_to_world_matrices (torch.Tensor) – Camera to world transformation matrices. Shape:
(batch_size, 4, 4).tsdf (JaggedTensor) – Current TSDF values for each voxel. Shape:
(batch_size, total_voxels, 1).weights (JaggedTensor) – Current integration weights for each voxel. Shape:
(batch_size, total_voxels, 1).depth_images (torch.Tensor) – Depth images from cameras. Shape:
(batch_size, height, width).weight_images (torch.Tensor, optional) – Weight of each depth sample in the images. Shape:
(batch_size, height, width). If None, defaults to uniform weights.
- Returns:
updated_grid (GridBatch) – Updated GridBatch with potentially expanded voxels.
updated_tsdf (JaggedTensor) – Updated TSDF values as JaggedTensor.
updated_weights (JaggedTensor) – Updated weights as JaggedTensor.
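For example, a minimal sketch of a depth-fusion loop. It assumes grid_batch is an existing GridBatch on a CUDA device and that frames is a hypothetical iterable yielding per-view projection matrices (batch_size, 3, 3), camera-to-world poses (batch_size, 4, 4), and depth images (batch_size, H, W); it also assumes the returned TSDF and weight tensors can be fed directly into the next integration step:

import torch

# Start from empty TSDF and weight volumes matching the current grid topology.
tsdf = grid_batch.jagged_like(torch.zeros(grid_batch.total_voxels, 1, device="cuda"))
weights = grid_batch.jagged_like(torch.zeros(grid_batch.total_voxels, 1, device="cuda"))

for proj, cam_to_world, depth in frames:
    # The grid may grow to cover newly observed space, so rebind all three outputs.
    grid_batch, tsdf, weights = grid_batch.integrate_tsdf(
        0.05,          # truncation distance in world units
        proj,          # (batch_size, 3, 3)
        cam_to_world,  # (batch_size, 4, 4)
        tsdf,
        weights,
        depth,         # (batch_size, H, W)
    )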
- integrate_tsdf_with_features(truncation_distance: float, projection_matrices: Tensor, cam_to_world_matrices: Tensor, tsdf: JaggedTensor, features: JaggedTensor, weights: JaggedTensor, depth_images: Tensor, feature_images: Tensor, weight_images: Tensor | None = None) tuple[GridBatch, JaggedTensor, JaggedTensor, JaggedTensor][source]
Integrate depth and feature images into TSDF volume with features.
Similar to integrate_tsdf but also integrates feature observations (e.g., color) along with the depth information. This is useful for colored 3D reconstruction.
- Parameters:
truncation_distance (float) – Maximum distance to truncate TSDF values (in world units).
projection_matrices (torch.Tensor) – Camera projection matrices. Shape:
(batch_size, 3, 3).cam_to_world_matrices (torch.Tensor) – Camera to world transformation matrices. Shape:
(batch_size, 4, 4).tsdf (JaggedTensor) – Current TSDF values for each voxel. Shape:
(batch_size, total_voxels, 1).features (JaggedTensor) – Current feature values for each voxel. Shape:
(batch_size, total_voxels, feature_dim).weights (JaggedTensor) – Current integration weights for each voxel. Shape:
(batch_size, total_voxels, 1).depth_images (torch.Tensor) – Depth images from cameras. Shape:
(batch_size, height, width).feature_images (torch.Tensor) – Feature images (e.g., RGB) from cameras. Shape:
(batch_size, height, width, feature_dim).weight_images (torch.Tensor, optional) – Weight of each depth sample in the images. Shape:
(batch_size, height, width). If None, defaults to uniform weights.
- Returns:
updated_grid (GridBatch) – Updated GridBatch with potentially expanded voxels.
updated_tsdf (JaggedTensor) – Updated TSDF values as JaggedTensor.
updated_weights (JaggedTensor) – Updated weights as JaggedTensor.
updated_features (JaggedTensor) – Updated features as JaggedTensor.
- is_contiguous() bool[source]
Check if the grid batch data is stored contiguously in memory.
- Returns:
is_contiguous (bool) – True if the data is contiguous, False otherwise.
- is_same(other: GridBatch) bool[source]
Check if two grid batches share the same underlying data in memory.
- Parameters:
other (GridBatch) – The other grid batch to compare with.
- Returns:
is_same (bool) – True if the grid batches have the same underlying data in memory, False otherwise.
- jagged_like(data: Tensor) JaggedTensor[source]
Create a JaggedTensor with the same jagged structure as this grid batch.
Useful for creating feature tensors that match the grid’s voxel layout.
- Parameters:
data (torch.Tensor) – Dense data to convert to jagged format. Shape:
(total_voxels, channels).- Returns:
jagged_data (JaggedTensor) – Data in jagged format matching the grid structure.
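For example, a minimal sketch assuming grid_batch is an existing GridBatch on a CUDA device:

import torch

# Flat per-voxel data in index space, one row per active voxel across the whole batch.
flat = torch.randn(grid_batch.total_voxels, 16, device="cuda")
feats = grid_batch.jagged_like(flat)  # JaggedTensor partitioned per grid, matching the voxel order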
- property jidx: Tensor
The jagged index tensor indicating which grid each voxel belongs to.
Note
This property is part of the
fvdb.JaggedTensorstructure and is useful for operations that need to know the grid index for each voxel.- Returns:
jidx (torch.Tensor) – A
(total_voxels,)-shaped integer tensor where each element is the grid index (0 to grid_count-1) that the voxel at that position belongs to.
- property joffsets: Tensor
The jagged offset tensor indicating the start index of voxels for each grid.
Note
This property is part of the
fvdb.JaggedTensorstructure. The offsets define the boundaries between grids in a flattened voxel array.- Returns:
joffsets (torch.Tensor) – A
(grid_count + 1,)-shaped integer tensor wherejoffsets[i]is the starting index of voxels for gridiin a flattened array, andjoffsets[i+1] - joffsets[i]is the number of voxels in gridi.
- marching_cubes(field: JaggedTensor, level: float = 0.0) tuple[JaggedTensor, JaggedTensor, JaggedTensor][source]
Extract isosurface meshes over data associated with this
GridBatchusing the marching cubes algorithm. Generates triangle meshes representing the isosurface at the specified level from a scalar field defined on the voxels.- Parameters:
field (JaggedTensor) – Scalar field values at each voxel in this
GridBatch. Afvdb.JaggedTensorwith shape(batch_size, total_voxels, 1).level (float) – The isovalue to extract the surface at. Default is
0.0.
- Returns:
vertex_positions (JaggedTensor) – Vertex positions of the meshes. A
fvdb.JaggedTensorwith shape(batch_size, num_vertices_for_grid_b, 3).face_indices (JaggedTensor) – Triangle face indices. A
fvdb.JaggedTensorwith shape(batch_size, num_faces_for_grid_b, 3).vertex_normals (JaggedTensor) – Vertex normals (computed from gradients). A
fvdb.JaggedTensorwith shape(batch_size, num_vertices_for_grid_b, 3).
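For example, extracting the zero level set of a signed distance field. A minimal sketch, where sdf is a hypothetical JaggedTensor of shape (batch_size, total_voxels, 1) associated with an existing GridBatch grid_batch:

verts, faces, normals = grid_batch.marching_cubes(sdf, level=0.0)
# verts / normals: one (num_vertices_for_grid_b, 3) tensor per grid
# faces: one (num_faces_for_grid_b, 3) integer tensor per grid of triangle vertex indices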
- max_pool(pool_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, data: JaggedTensor, stride: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = 0, coarse_grid: GridBatch | None = None) tuple[JaggedTensor, GridBatch][source]
Apply max pooling to the given data associated with this
GridBatchreturned as data associated with the givencoarse_gridor a newly created coarseGridBatch.Performs max pooling on the voxel data, reducing the resolution by the specified
pool_factor. Each output voxel contains the maximum of the corresponding input voxels within the pooling window. The pooling operation respects the sparse structure of thisGridBatchand the givencoarse_grid.Note
If you pass
coarse_grid = None, the returned coarse grid batch will have its voxel sizes multiplied by thepool_factorand origins adjusted accordingly.Note
This method supports backpropagation through the pooling operation.
- Parameters:
pool_factor (NumericMaxRank1) – The factor by which to downsample the grids, broadcastable to shape
(3,), integer dtypedata (JaggedTensor) – The voxel data to pool. A
fvdb.JaggedTensorwith shape(batch_size, total_voxels, channels).stride (NumericMaxRank1) – The stride to use when pooling. If
0(default), stride equalspool_factor, broadcastable to shape(3,), integer dtypecoarse_grid (GridBatch, optional) – Pre-allocated coarse grid batch to use for output. If
None, a newGridBatchis created.
- Returns:
pooled_data (JaggedTensor) – A
fvdb.JaggedTensorcontaining the pooled voxel data with shape(batch_size, coarse_total_voxels, channels).coarse_grid (GridBatch) – A
GridBatchobject representing the coarse grid batch topology after pooling. Matches the providedcoarse_gridif given.
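For example, a minimal sketch where feats is a hypothetical JaggedTensor of shape (batch_size, total_voxels, C) associated with an existing GridBatch grid_batch:

# Isotropic 2x pooling; a new coarse GridBatch is created since coarse_grid is None.
pooled, coarse = grid_batch.max_pool(2, feats)

# Anisotropic pooling with an explicit stride.
pooled2, coarse2 = grid_batch.max_pool((4, 4, 2), feats, stride=(4, 4, 2))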
- merged_grid(other: GridBatch) GridBatch[source]
Return a grid batch that is the union of this grid batch with another.
Merges two grid batches by taking the union of their active voxels. The grids must have compatible dimensions and transforms.
- Parameters:
other (GridBatch) – The other grid batch to merge with.
- Returns:
merged_grid (GridBatch) – A new GridBatch containing the union of active voxels from both grids.
- morton(offset: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None) JaggedTensor[source]
Return Morton codes (Z-order curve) for active voxels in this grid batch.
Morton codes use xyz bit interleaving to create a space-filling curve that preserves spatial locality. This is useful for serialization, sorting, and spatial data structures.
- Parameters:
offset (NumericMaxRank1 | None) – Optional offset to apply to voxel coordinates before encoding. If None, the negative of the minimum voxel coordinate across all voxels is used.
- Returns:
JaggedTensor – A JaggedTensor of shape [num_grids, -1, 1] containing the Morton codes for each active voxel in the batch.
- morton_zyx(offset: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | None = None) JaggedTensor[source]
Return transposed Morton codes (Z-order curve) for active voxels in this grid batch.
Transposed Morton codes use zyx bit interleaving to create a space-filling curve. This variant can provide better spatial locality for certain access patterns.
- Parameters:
offset (NumericMaxRank1 | None) – Optional offset to apply to voxel coordinates before encoding. If None, the negative of the minimum voxel coordinate across all voxels is used.
- Returns:
JaggedTensor – A JaggedTensor of shape [num_grids, -1, 1] containing the transposed Morton codes for each active voxel in the batch.
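For example, computing a locality-preserving ordering of the voxels. A minimal sketch, assuming grid_batch is an existing GridBatch and that the returned JaggedTensor exposes its flat values through a .jdata attribute (an assumption not documented on this page):

codes = grid_batch.morton()        # xyz-interleaved codes, one per active voxel
codes_t = grid_batch.morton_zyx()  # zyx-interleaved (transposed) variant

# Serialize all voxels in the batch along the Z-order curve.
order = codes.jdata.reshape(-1).argsort()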
- neighbor_indexes(ijk: JaggedTensor, extent: int, bitshift: int = 0) JaggedTensor[source]
Get indexes of neighboring voxels in this
GridBatchin an N-ring neighborhood of each voxel coordinate inijk.- Parameters:
ijk (JaggedTensor) – Voxel coordinates to find neighbors for. Shape:
(batch_size, num_queries_for_grid_b, 3)with integer coordinates.extent (int) – Size of the neighborhood ring (N-ring).
bitshift (int) – Bit shift value for encoding. Default is 0.
- Returns:
neighbor_indexes (JaggedTensor) – A
fvdb.JaggedTensor with shape (batch_size, num_queries_for_grid_b, N) containing the linear indexes of neighboring voxels for each query coordinate in ijk. Neighbors that are not active in the grid have index -1.
- property num_bytes: Tensor
The size in bytes each grid in this
GridBatchoccupies in memory.- Returns:
num_bytes (torch.Tensor) – A
(grid_count,)-shaped tensor containing the size in bytes of each grid.
- property num_leaf_nodes: Tensor
The number of leaf nodes in the NanoVDB for each grid in this
GridBatch.- Returns:
num_leaf_nodes (torch.Tensor) – A
(grid_count,)-shaped tensor containing the number of leaf nodes in each grid.
- property num_voxels: Tensor
The number of active voxels in each grid of this
GridBatch.- Returns:
num_voxels (torch.Tensor) – A
(grid_count,)-shaped tensor containing the number of active voxels in each grid.
- num_voxels_at(bi: int) int[source]
Get the number of active voxels in a specific grid.
- Parameters:
bi (int) – The batch index of the grid.
- Returns:
num_voxels (int) – Number of active voxels in the specified grid.
- origin_at(bi: int) Tensor[source]
Get the world-space origin of a specific grid.
- Parameters:
bi (int) – The batch index of the grid.
- Returns:
origin (torch.Tensor) – The origin coordinates in world space. Shape:
(3,).
- property origins: Tensor
The world-space origin of each grid. The origin is the center of the
[0,0,0]voxel.- Returns:
origins (torch.Tensor) – A
(grid_count, 3)-shaped tensor of origins.
- points_in_grid(points: JaggedTensor) JaggedTensor[source]
Check if world-space points are located within active voxels.
Tests whether the given points fall within voxels that are active in the grid.
- Parameters:
points (JaggedTensor) – World-space points to test. Shape:
(batch_size, num_points_for_grid_b, 3).- Returns:
mask (JaggedTensor) – Boolean mask indicating which points are in active voxels. Shape:
(batch_size, num_points_for_grid_b,).
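For example, a minimal sketch where pts is a hypothetical JaggedTensor of world-space points with shape (batch_size, num_points_for_grid_b, 3) and grid_batch is an existing GridBatch:

inside = grid_batch.points_in_grid(pts)  # boolean JaggedTensor, one flag per point

# Combine with world_to_voxel() to see which voxel each point falls in.
vox = grid_batch.world_to_voxel(pts)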
- pruned_grid(mask: JaggedTensor) GridBatch[source]
Return a pruned grid based on a boolean mask.
Creates a new grid containing only the voxels where the mask is True.
- Parameters:
mask (JaggedTensor) – Boolean mask for each voxel. Shape:
(batch_size, total_voxels,).- Returns:
pruned_grid (GridBatch) – A new GridBatch containing only voxels where mask is True.
- ray_implicit_intersection(ray_origins: JaggedTensor, ray_directions: JaggedTensor, grid_scalars: JaggedTensor, eps: float = 0.0) JaggedTensor[source]
Find ray intersections with implicit surface defined by grid scalars.
Computes intersection points between rays and an implicit surface defined by scalar values stored in the grid voxels (e.g., signed distance function).
- Parameters:
ray_origins (JaggedTensor) – Starting points of rays in world space. Shape:
(batch_size, num_rays_for_grid_b, 3).ray_directions (JaggedTensor) – Direction vectors of rays. Shape:
(batch_size, num_rays_for_grid_b, 3). Should be normalized.grid_scalars (JaggedTensor) – Scalar field values at each voxel. Shape:
(batch_size, total_voxels, 1).eps (float) – Epsilon value for numerical stability. Default is 0.0.
- Returns:
intersections (JaggedTensor) – Intersection information for each ray.
- rays_intersect_voxels(ray_origins: JaggedTensor, ray_directions: JaggedTensor, eps: float = 0.0) JaggedTensor[source]
Return a boolean JaggedTensor recording whether each ray in a set of rays hits any voxel in this GridBatch.
- Parameters:
ray_origins (JaggedTensor) – A JaggedTensor of ray origins (one set of rays per grid in the batch), i.e. a JaggedTensor of the form [ray_o0, ..., ray_oB] where ray_oI has shape [N_I, 3].
ray_directions (JaggedTensor) – A JaggedTensor of ray directions (one set of rays per grid in the batch), i.e. a JaggedTensor of the form [ray_d0, ..., ray_dB] where ray_dI has shape [N_I, 3].
eps (float) – Epsilon value to skip intersections whose length is less than this value for numerical stability. Default is 0.0.
- Returns:
hit_mask (JaggedTensor) – A
fvdb.JaggedTensorindicating whether each ray hit a voxel. i.e. a booleanfvdb.JaggedTensorof the form[hit_0, ..., hit_B]wherehit_Ihas shape[N_I].
- refine(subdiv_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, data: JaggedTensor, mask: JaggedTensor | None = None, fine_grid: GridBatch | None = None) tuple[JaggedTensor, GridBatch][source]
Refine data associated with this
GridBatch into higher-resolution grids by subdividing each voxel. i.e. for each voxel (i, j, k) in each grid of this GridBatch, copy the data associated with that voxel to the voxels (subdiv_factor[0]*i + di, subdiv_factor[1]*j + dj, subdiv_factor[2]*k + dk) for di, dj, dk in {0, ..., subdiv_factor - 1} in the output data associated with fine_grid, if that voxel exists in the fine grid.

Note
If you pass
fine_grid = None, this method will create a new fineGridBatchwith its voxel sizes divided by thesubdiv_factorand origins adjusted accordingly.Note
You can skip copying data at certain voxels in this
GridBatchby passing a booleanmask. Only data at voxels corresponding toTruevalues in the mask will be refined.Note
This method supports backpropagation through the refinement operation.
See also
refined_grid()for obtaining a refined version of the grid structure without refining data.- Parameters:
subdiv_factor (NumericMaxRank1) – Refinement factor between this
GridBatchand the fine grid batch, broadcastable to shape(3,), integer dtypedata (JaggedTensor) – Voxel data to refine. A
fvdb.JaggedTensorwith shape(batch_size, total_voxels, channels).mask (JaggedTensor, optional) – Boolean mask indicating which voxels in the input grids to refine. If
None, data associated with all input voxels are refined.fine_grid (GridBatch, optional) – Pre-allocated fine
GridBatchto use for output. IfNone, a newGridBatchis created.
- Returns:
refined_data (JaggedTensor) – The refined data as a
fvdb.JaggedTensorfine_grid (GridBatch) – The fine
GridBatchcontaining the refined structure
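For example, a minimal sketch where feats is a hypothetical JaggedTensor of shape (batch_size, total_voxels, C) associated with an existing GridBatch grid_batch:

# Subdivide every voxel 2x along each axis and copy its data to the 8 children.
fine_data, fine_grid = grid_batch.refine(2, feats)

# Structure-only refinement, without refining any data.
fine_grid_only = grid_batch.refined_grid(2)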
- refined_grid(subdiv_factor: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, mask: JaggedTensor | None = None) GridBatch[source]
Return a refined version of this
GridBatch. i.e. each voxel in each grid is subdivided by the specifiedsubdiv_factorto create higher-resolution grids.Note
You can skip refining certain voxels in this
GridBatchby passing a booleanmask. Only voxels corresponding toTruevalues in the mask will be refined.- Parameters:
subdiv_factor (NumericMaxRank1) – Factor by which to refine each grid in the batch, broadcastable to shape
(3,), integer dtypemask (JaggedTensor, optional) – Boolean mask indicating which voxels to refine. If
None, all voxels are refined.
- Returns:
refined_grid (GridBatch) – A new
GridBatchwith refined structure.
- sample_bezier(points: JaggedTensor, voxel_data: JaggedTensor) JaggedTensor[source]
Sample data in a
fvdb.JaggedTensorassociated with thisGridBatchat world-space points using Bézier interpolation.This method uses Bézier interpolation to interpolate data values at arbitrary continuous positions in world space, based on values defined at voxel centers.
Note
This method supports backpropagation through the interpolation operation.
Note
This method assumes that the voxel data is defined at the centers of voxels. Samples outside the grids return zero.
See also
sample_trilinear()for trilinear interpolation.See also
sample_bezier_with_grad()for Bézier interpolation which also returns spatial gradients.- Parameters:
points (JaggedTensor) – World-space points to sample at. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3).voxel_data (JaggedTensor) – Data associated with each voxel in this
GridBatch. Afvdb.JaggedTensorwith shape(batch_size, total_voxels, channels*).
- Returns:
interpolated_data (JaggedTensor) – Interpolated data at each point. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, channels*).
- sample_bezier_with_grad(points: JaggedTensor, voxel_data: JaggedTensor) tuple[JaggedTensor, JaggedTensor][source]
Sample data in a
fvdb.JaggedTensorassociated with thisGridBatchat world-space points using Bézier interpolation, and return the sampled values and their spatial gradients at those points.This method uses Bézier interpolation to interpolate data values at arbitrary continuous positions in world space, based on values defined at voxel centers. It returns both the interpolated data and the gradients of the interpolated data with respect to the world coordinates.
Note
This method assumes that the voxel data is defined at the centers of voxels. Samples outside the grids return zero.
Note
This method supports backpropagation through the interpolation operation.
See also
sample_bezier()for Bézier interpolation without gradients.See also
sample_trilinear_with_grad()for trilinear interpolation with spatial gradients.- Parameters:
points (JaggedTensor) – World-space points to sample at. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3).voxel_data (JaggedTensor) – Data associated with each voxel in this
GridBatch. Afvdb.JaggedTensorwith shape(batch_size, total_voxels, channels*).
- Returns:
interpolated_data (JaggedTensor) – Interpolated data at each point. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, channels*).interpolation_gradients (JaggedTensor) – Gradients of the interpolated data with respect to world coordinates. This is the spatial gradient of the Bézier interpolation at each point. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3, channels*).
- sample_trilinear(points: JaggedTensor, voxel_data: JaggedTensor) JaggedTensor[source]
Sample data in a
fvdb.JaggedTensorassociated with thisGridBatchat world-space points using trilinear interpolation.This method uses trilinear interpolation to interpolate data values at arbitrary continuous positions in world space, based on values defined at voxel centers.
Note
This method supports backpropagation through the interpolation operation.
Note
This method assumes that the voxel data is defined at the centers of voxels. Samples outside the grids return zero.
See also
sample_bezier()for Bézier interpolation.See also
sample_trilinear_with_grad()for trilinear interpolation which also returns spatial gradients.- Parameters:
points (JaggedTensor) – World-space points to sample at. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3).voxel_data (JaggedTensor) – Data associated with each voxel in this
GridBatch. Afvdb.JaggedTensorwith shape(batch_size, total_voxels, channels*).
- Returns:
interpolated_data (JaggedTensor) – Interpolated data at each point. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, channels*).
- sample_trilinear_with_grad(points: JaggedTensor, voxel_data: JaggedTensor) tuple[JaggedTensor, JaggedTensor][source]
Sample data in a
fvdb.JaggedTensorassociated with thisGridBatchat world-space points using trilinear interpolation, and return the sampled values and their spatial gradients at those points.This method uses trilinear interpolation to interpolate data values at arbitrary continuous positions in world space, based on values defined at voxel centers. It returns both the interpolated data and the gradients of the interpolated data with respect to the world coordinates.
Note
This method assumes that the voxel data is defined at the centers of voxels. Samples outside the grids return zero.
Note
This method supports backpropagation through the interpolation operation.
See also
sample_trilinear()for trilinear interpolation without gradients.See also
sample_bezier_with_grad()for Bézier interpolation with spatial gradients.- Parameters:
points (JaggedTensor) – World-space points to sample at. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3).voxel_data (JaggedTensor) – Data associated with each voxel in this
GridBatch. Afvdb.JaggedTensorwith shape(batch_size, total_voxels, channels*).
- Returns:
interpolated_data (JaggedTensor) – Interpolated data at each point. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, channels*).interpolation_gradients (JaggedTensor) – Gradients of the interpolated data with respect to world coordinates. This is the spatial gradient of the trilinear interpolation at each point. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3, channels*).
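For example, a minimal sketch where pts is a hypothetical JaggedTensor of world-space query points with shape (batch_size, num_points_for_grid_b, 3), feats is a JaggedTensor of per-voxel data with shape (batch_size, total_voxels, C), and grid_batch is an existing GridBatch:

values = grid_batch.sample_trilinear(pts, feats)                    # (batch_size, num_points_for_grid_b, C)
values, grads = grid_batch.sample_trilinear_with_grad(pts, feats)   # grads: (batch_size, num_points_for_grid_b, 3, C)

# The Bézier variants sample_bezier() / sample_bezier_with_grad() take the same arguments.
smooth = grid_batch.sample_bezier(pts, feats)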
- save_nanovdb(path: str, data: JaggedTensor | None = None, names: list[str] | str | None = None, name: str | None = None, compressed: bool = False, verbose: bool = False) None[source]
Save a grid batch and optional voxel data to a .nvdb file.
Saves sparse grids in the NanoVDB format, which can be loaded by other applications that support OpenVDB/NanoVDB.
- Parameters:
path (str) – The file path to save to. Should have .nvdb extension.
data (JaggedTensor | None) – Voxel data to save with the grids. Shape:
(batch_size, total_voxels, channels). IfNone, only grid structure is saved.names (list[str] | str | None) – Names for each grid in the batch. If a single string, it’s used as the name for all grids.
name (str | None) – Alternative way to specify a single name for all grids. Takes precedence over names parameter.
compressed (bool) – Whether to compress the data using Blosc compression. Default is False.
verbose (bool) – Whether to print information about the saved grids. Default is False.
Note
The parameters ‘names’ and ‘name’ are mutually exclusive ways to specify grid names. Use ‘name’ for a single name applied to all grids, or ‘names’ for individual names per grid.
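For example, a minimal sketch assuming grid_batch is an existing GridBatch and feats is a JaggedTensor of per-voxel data as in the earlier examples:

# Save the grid structure only.
grid_batch.save_nanovdb("grids.nvdb")

# Save structure plus per-voxel data, Blosc-compressed, with one name shared by all grids.
grid_batch.save_nanovdb("grids_with_data.nvdb", data=feats, name="density", compressed=True)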
- segments_along_rays(ray_origins: JaggedTensor, ray_directions: JaggedTensor, max_segments: int, eps: float = 0.0) JaggedTensor[source]
Enumerate segments along rays.
- Parameters:
ray_origins (JaggedTensor) – Origin of each ray. Shape:
(batch_size, num_rays_for_grid_b, 3).ray_directions (JaggedTensor) – Direction of each ray. Shape:
(batch_size, num_rays_for_grid_b, 3).max_segments (int) – Maximum number of segments to enumerate.
eps (float) – Small epsilon value to avoid numerical issues.
- Returns:
ray_segments (JaggedTensor) – A JaggedTensor containing the segments along the rays, with lshape [[S_{0,0}, ..., S_{0,N_0}], ..., [S_{B,0}, ..., S_{B,N_B}]] and eshape (2,) representing the start and end distance of each segment.
- sparse_conv_halo(input: JaggedTensor, weight: Tensor, variant: int = 8) JaggedTensor[source]
Perform sparse convolution with halo exchange optimization.
Applies sparse convolution using halo exchange to efficiently handle boundary conditions in distributed or multi-block sparse grids.
- Parameters:
input (JaggedTensor) – Input features for each voxel. Shape:
(batch_size, total_voxels, in_channels).weight (torch.Tensor) – Convolution weights.
variant (int) – Variant of the halo implementation to use. Currently
8and64are supported. Default is8.
- Returns:
output (JaggedTensor) – Output features after convolution.
Note
Currently only 3x3x3 kernels are supported.
- splat_bezier(points: JaggedTensor, points_data: JaggedTensor) JaggedTensor[source]
Splat data at a set of input points into a
fvdb.JaggedTensorassociated with thisGridBatchusing Bézier interpolation. i.e. each point distributes its data to the surrounding voxels using cubic Bézier interpolation weights.Note
This method assumes that the voxel data is defined at the centers of voxels.
Note
This method supports backpropagation through the splatting operation.
- Parameters:
points (JaggedTensor) – World-space positions of points used to splat data. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3).points_data (JaggedTensor) – Data associated with each point to splat into the grids. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, channels*).
- Returns:
splatted_features (JaggedTensor) – Accumulated features at each voxel after splatting. A
fvdb.JaggedTensorwith shape(batch_size, total_voxels, channels*).
- splat_trilinear(points: JaggedTensor, points_data: JaggedTensor) JaggedTensor[source]
Splat data at a set of input points into a
fvdb.JaggedTensorassociated with thisGridBatchusing trilinear interpolation. i.e. each point distributes its data to the surrounding voxels using trilinear interpolation weights.Note
This method assumes that the voxel data is defined at the centers of voxels.
Note
This method supports backpropagation through the splatting operation.
- Parameters:
points (JaggedTensor) – World-space positions of points used to splat data. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3).points_data (JaggedTensor) – Data associated with each point to splat into the grids. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, channels*).
- Returns:
splatted_features (JaggedTensor) – Accumulated features at each voxel after splatting. A
fvdb.JaggedTensorwith shape(batch_size, total_voxels, channels*).
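For example, accumulating point attributes onto the grids. A minimal sketch, where pts is a hypothetical JaggedTensor of world-space points with shape (batch_size, num_points_for_grid_b, 3), pt_feats is a matching JaggedTensor of per-point data, and grid_batch is an existing GridBatch:

voxel_feats = grid_batch.splat_trilinear(pts, pt_feats)  # (batch_size, total_voxels, C)

# splat_bezier() has the same signature but distributes data with cubic Bézier weights.
voxel_feats_smooth = grid_batch.splat_bezier(pts, pt_feats)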
- to(target: str | device | Tensor | JaggedTensor | GridBatch) GridBatch[source]
Move grid batch to a target device or match device of target object.
- Parameters:
target (str | torch.device | torch.Tensor | JaggedTensor | GridBatch) – Target used to determine the device. Can be:
- str: a device string (e.g., “cuda”, “cpu”)
- torch.device: a PyTorch device object
- torch.Tensor: match the device of this tensor
- JaggedTensor: match the device of this JaggedTensor
- GridBatch: match the device of this GridBatch
- Returns:
grid_batch (GridBatch) – A new GridBatch on the target device.
- property total_bbox: Tensor
The voxel-space bounding box that encompasses all grids in this
GridBatch.Note
The bounding box is inclusive of the minimum voxel and the maximum voxel across all grids.
- Returns:
total_bbox (torch.Tensor) – A
(2, 3)-shaped tensor representing the minimum and maximum voxel indices of the bounding box that encompasses all grids in the batch. If all grids have zero voxels, returns a zero tensor.
- property total_bytes: int
The total size in bytes all grids in this
GridBatchoccupy in memory.- Returns:
total_bytes (int) – The total size in bytes of all grids in the batch.
- property total_leaf_nodes: int
The total number of leaf nodes in the NanoVDB across all grids in this
GridBatch.- Returns:
total_leaf_nodes (int) – The total number of leaf nodes across all grids.
- property total_voxels: int
The total number of active voxels across all grids in the batch.
- Returns:
total_voxels (int) – Total active voxel count.
- uniform_ray_samples(ray_origins: JaggedTensor, ray_directions: JaggedTensor, t_min: JaggedTensor, t_max: JaggedTensor, step_size: float, cone_angle: float = 0.0, include_end_segments: bool = True, return_midpoints: bool = False, eps: float = 0.0) JaggedTensor[source]
Generate uniformly spaced samples along rays intersecting the grids.
Creates sample points at regular intervals along rays, but only for segments that intersect with active voxels. Useful for volume rendering and ray marching.
- Parameters:
ray_origins (JaggedTensor) – Starting points of rays in world space. Shape:
(batch_size, num_rays_for_grid_b, 3).ray_directions (JaggedTensor) – Direction vectors of rays (should be normalized). Shape:
(batch_size, num_rays_for_grid_b, 3).t_min (JaggedTensor) – Minimum distance along rays to start sampling. Shape:
(batch_size, num_rays_for_grid_b).t_max (JaggedTensor) – Maximum distance along rays to stop sampling. Shape:
(batch_size, num_rays_for_grid_b).step_size (float) – Distance between samples along each ray.
cone_angle (float) – Cone angle for cone tracing (in radians). Default is 0.0.
include_end_segments (bool) – Whether to include partial segments at ray ends. Default is True.
return_midpoints (bool) – Whether to return segment midpoints instead of start points. Default is False.
eps (float) – Epsilon value for numerical stability. Default is 0.0.
- Returns:
ray_samples (JaggedTensor) – The samples along the rays. A fvdb.JaggedTensor with lshape [[S_{0,0}, ..., S_{0,N_0}], ..., [S_{B,0}, ..., S_{B,N_B}]] and eshape (2,) or (1,), representing the start and end distance of each sample, or the midpoint of each sample if return_midpoints is True.
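For example, generating samples for volume rendering. A minimal sketch, where ray_o, ray_d, t0, and t1 are hypothetical JaggedTensors with the shapes listed above and grid_batch is an existing GridBatch:

samples = grid_batch.uniform_ray_samples(
    ray_o, ray_d, t0, t1,
    step_size=0.01,
    return_midpoints=True,  # eshape (1,): one midpoint distance per sample
)

# Quick visibility test without generating samples.
hit = grid_batch.rays_intersect_voxels(ray_o, ray_d)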
- voxel_size_at(bi: int) Tensor[source]
Get voxel size at a specific grid index.
- Parameters:
bi (int) – Grid index.
- Returns:
voxel_size (torch.Tensor) – Voxel size at the specified grid index. Shape:
(3,).
- property voxel_sizes: Tensor
The world-space voxel size of each grid in the batch.
- Returns:
voxel_sizes (torch.Tensor) – A
(grid_count, 3)-shaped tensor of voxel sizes.
- voxel_to_world(ijk: JaggedTensor) JaggedTensor[source]
Transform a set of voxel-space coordinates to their corresponding positions in world space using each grid’s origin and voxel size.
See also
world_to_voxel()for the inverse transformation, andvoxel_to_world_matricesandworld_to_voxel_matricesfor the actual transformation matrices.- Parameters:
ijk (JaggedTensor) – A
fvdb.JaggedTensorof coordinates to convert. Shape:(batch_size, num_points_for_grid_b, 3). Can be fractional for interpolation.- Returns:
world_coords (JaggedTensor) – World coordinates. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3).
- property voxel_to_world_matrices: Tensor
The voxel-to-world transformation matrices for each grid in this
GridBatch, which transform voxel space coordinates to world space coordinates.- Returns:
voxel_to_world_matrices (torch.Tensor) – A
(grid_count, 4, 4)-shaped tensor where each(4, 4)matrix represents the voxel-to-world transformation for a grid.
- voxels_along_rays(ray_origins: JaggedTensor, ray_directions: JaggedTensor, max_voxels: int, eps: float = 0.0, return_ijk: bool = True, cumulative: bool = False) tuple[JaggedTensor, JaggedTensor][source]
Enumerate voxels intersected by rays.
Finds all active voxels that are intersected by the given rays using a DDA (Digital Differential Analyzer) algorithm.
- Parameters:
ray_origins (JaggedTensor) – Starting points of rays in world space. Shape:
(batch_size, num_rays_for_grid_b, 3).ray_directions (JaggedTensor) – Direction vectors of rays (should be normalized). Shape:
(batch_size, num_rays_for_grid_b, 3).max_voxels (int) – Maximum number of voxels to return per ray.
eps (float) – Epsilon value for numerical stability. Default is 0.0.
return_ijk (bool) – Whether to return voxel indices. If False, returns linear indices instead. Default is True.
cumulative (bool) – Whether to return cumulative indices across the batch. Default is False.
- Returns:
voxels (JaggedTensor) – A JaggedTensor with lshape
[[V_{0,0}, ..., V_{0,N_0}], ..., [V_{B,0}, ..., V_{B,N_B}]] and eshape (3,) or (,) containing the ijk coordinates or indices of the voxels intersected by the rays.
times (JaggedTensor) – A JaggedTensor with lshape [[T_{0,0}, ..., T_{0,N_0}], ..., [T_{B,0}, ..., T_{B,N_B}]] and eshape (2,) containing the entry and exit distance along the ray of each voxel.
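For example, a minimal sketch where ray_o and ray_d are hypothetical JaggedTensors of per-grid ray origins and directions and grid_batch is an existing GridBatch:

# Up to 64 voxels per ray, returned as ijk coordinates plus entry/exit distances.
ijk_hits, times = grid_batch.voxels_along_rays(ray_o, ray_d, max_voxels=64)

# Linear indices instead of ijk coordinates.
idx_hits, times = grid_batch.voxels_along_rays(ray_o, ray_d, max_voxels=64, return_ijk=False)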
- world_to_voxel(points: JaggedTensor) JaggedTensor[source]
Convert world-space coordinates to voxel-space coordinates using each grid’s transform.
Note
This method supports backpropagation through the transformation operation.
See also
voxel_to_world()for the inverse transformation, andvoxel_to_world_matricesandworld_to_voxel_matricesfor the actual transformation matrices.- Parameters:
points (JaggedTensor) – Per-grid world-space positions to convert. Shape:
(batch_size, num_points_for_grid_b, 3).- Returns:
voxel_points (JaggedTensor) – Grid coordinates. A
fvdb.JaggedTensorwith shape(batch_size, num_points_for_grid_b, 3). Can contain fractional values.
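For example, round-tripping between the two coordinate systems. A minimal sketch, where pts is a hypothetical JaggedTensor of world-space points and grid_batch is an existing GridBatch:

vox = grid_batch.world_to_voxel(pts)   # fractional voxel-space coordinates
back = grid_batch.voxel_to_world(vox)  # recovers the original world-space positions

# The same transforms as explicit homogeneous matrices.
w2v = grid_batch.world_to_voxel_matrices  # (grid_count, 4, 4)
v2w = grid_batch.voxel_to_world_matrices  # (grid_count, 4, 4)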
- property world_to_voxel_matrices: Tensor
The world-to-voxel transformation matrices for each grid in this
GridBatch, which transform world space coordinates to voxel space coordinates.- Returns:
world_to_voxel_matrices (torch.Tensor) – A
(grid_count, 4, 4)-shaped tensor where each(4, 4)matrix represents the world-to-voxel transformation for a grid.