Visualization

class fvdb.viz.CamerasView(scene_name: str, name: str, camera_to_world_matrices: Tensor, projection_matrices: Tensor, image_sizes: Tensor, axis_length: float, axis_thickness: float, frustum_line_width: float, frustum_scale: float, frustum_color: tuple[float, float, float], frustum_near_plane: float, frustum_far_plane: float, enabled: bool, _private: Any = None)[source]

A view for a set of camera frusta and axes in a fvdb.viz.Scene with parameters to adjust how the cameras are rendered.

Each camera is represented by its camera-to-world and projection matrices, and drawn as a wireframe frustum with orthogonal axes at the camera’s origin.

property axis_length: float

Get the length of the axes drawn at each camera origin in world units.

Returns:

length (float) – The length of the axes.

property axis_thickness: float

Get the thickness of the axes drawn at each camera origin in pixel units.

Returns:

thickness (float) – The thickness of the axes.

property enabled: bool

Return whether the camera frusta and axes are shown in the scene.

Returns:

enabled (bool) – True if the camera frusta and axes are shown in the scene, False otherwise.

property frustum_color: Tensor

Get the RGB color of the frustum lines as a tensor of shape (3,) with values in [0, 1].

Returns:

torch.Tensor – The RGB color of the frustum lines.

property frustum_line_width: float

Get the width (in pixels) of the frustum lines in the camera frustum view.

Returns:

width (float) – The line width of the frustum lines.

property frustum_scale: float

Get the scale factor applied to the frustum visualization. Each frustum will have its size multiplied by this scale factor when rendered.

For example, if the frustum has near = 0.1 and far = 1.0, setting the frustum scale to 2.0 will render the frustum as if near = 0.2 and far = 2.0.

Returns:

scale (float) – The scale factor applied to the frustum visualization.

class fvdb.viz.GaussianSplat3dView(scene_name: str, name: str, gaussian_splat_3d: GaussianSplat3d, tile_size: int = 16, min_radius_2d: float = 0.0, eps_2d: float = 0.3, antialias: bool = False, sh_degree_to_use: int = -1, sh_ordering_mode: ShOrderingMode = ShOrderingMode.RGB_RGB_RGB, _private: Any = None)[source]
property eps_2d: float

Get the 2D epsilon value used for rendering splats.

Returns:

float – The 2D epsilon value.

property min_radius_2d: float

Get the minimum radius in pixels below which splats will not be rendered.

Returns:

float – The minimum radius in pixels.

property sh_degree_to_use: int

Get the degree of spherical harmonics to use when rendering colors.

Returns:

int – The degree of spherical harmonics to use.

property sh_ordering_mode: ShOrderingMode

Get the spherical harmonics ordering mode used for rendering colors.

Returns:

ShOrderingMode – The spherical harmonics ordering mode.

property tile_size: int

Get the 2D tile size used when rendering splats. Larger tiles can improve performance, but may exhaust shared memory on the GPU. In general, tile sizes of 8, 16, or 32 are recommended.

Returns:

int – The current tile size.

class fvdb.viz.PointCloudView(scene_name: str, name: str, positions: Tensor, colors: Tensor, point_size: float, _private: Any = None)[source]
property point_size: float

Get the size (in pixels) of points when rendering.

Returns:

size (float) – The current point size.

class fvdb.viz.Scene(name: str)[source]
add_cameras(name: str, camera_to_world_matrices: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] | Sequence[Sequence[Sequence[int | float | integer | floating]]], projection_matrices: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] | Sequence[Sequence[Sequence[int | float | integer | floating]]], image_sizes: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]] | None = None, axis_length: float = 0.3, axis_thickness: float = 2.0, frustum_line_width: float = 2.0, frustum_scale: float = 1.0, frustum_color: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = (0.5, 0.8, 0.3), frustum_near_plane: float = 0, frustum_far_plane: float = 0.5, enabled: bool = True) CamerasView[source]

Add a CamerasView to this Scene and return the added camera view.
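
Example usage (a minimal sketch; the single identity camera pose and the pinhole intrinsics below are illustrative values, not part of the API):

import torch
import fvdb

scene = fvdb.viz.get_scene("My Scene")

# One camera at the world origin (identity camera-to-world matrix), shape (1, 4, 4)
camera_to_world = torch.eye(4).unsqueeze(0)

# A simple pinhole projection matrix (focal length 500 px, principal point at the
# center of a 640x480 image), shape (1, 3, 3)
projection = torch.tensor([[[500.0, 0.0, 320.0],
                            [0.0, 500.0, 240.0],
                            [0.0, 0.0, 1.0]]])

# Image sizes are (height, width) per camera, shape (1, 2)
image_sizes = torch.tensor([[480, 640]])

cameras_view = scene.add_cameras(
    name="my cameras",
    camera_to_world_matrices=camera_to_world,
    projection_matrices=projection,
    image_sizes=image_sizes,
)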

Parameters:
  • name (str) – The name of the camera view.

  • camera_to_world_matrices (NumericMaxRank3) – The 4x4 camera to world transformation matrices (one per camera) encoded as a tensor-like object of shape (N, 4, 4) where N is the number of cameras.

  • projection_matrices (NumericMaxRank3 | None) – The 3x3 projection matrices (one per camera) encoded as a tensor-like object of shape (N, 3, 3) where N is the number of cameras. If None, it will use the projection matrix of the scene’s main camera.

  • image_sizes (NumericMaxRank2 | None) – The image sizes as a tensor of shape (N, 2) where N is the number of cameras, such that height_i, width_i = image_sizes[i] is the resolution of the i-th camera. If None, the image sizes will be inferred from the projection matrices assuming square pixels and that the principal point is at the center of the image.

  • axis_length (float) – The length of the axis lines in the camera frustum view.

  • axis_thickness (float) – The thickness (in world coordinates) of the axis lines in the camera frustum view.

  • frustum_line_width (float) – The width (in pixels) of the frustum lines in the camera frustum view.

  • frustum_scale (float) – The scale factor for the frustum size in the camera frustum view.

  • frustum_color (NumericMaxRank1) – The color of the frustum lines as a sequence of three floats (R, G, B) in the range [0, 1].

  • frustum_near_plane (float) – The near clipping plane distance for the frustum in the camera frustum view.

  • frustum_far_plane (float) – The far clipping plane distance for the frustum in the camera frustum view.

  • enabled (bool) – If True, the camera view UI is enabled and the cameras will be rendered. If False, the camera view UI is disabled and the cameras will not be rendered.

add_gaussian_splat_3d(name: str, gaussian_splat_3d: GaussianSplat3d, tile_size: int = 16, min_radius_2d: float = 0.0, eps_2d: float = 0.3, antialias: bool = False, sh_degree_to_use: int = -1) GaussianSplat3dView[source]

Add a fvdb.GaussianSplat3d to the viewer and return a view for it.
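
Example usage (a minimal sketch; it assumes gaussian_splats is an existing fvdb.GaussianSplat3d obtained elsewhere, e.g. loaded from disk or produced by training):

import fvdb

scene = fvdb.viz.get_scene("My Scene")

# Assumes an existing fvdb.GaussianSplat3d (loaded or trained elsewhere)
gaussian_splats = ...

view = scene.add_gaussian_splat_3d(
    name="my splats",
    gaussian_splat_3d=gaussian_splats,
    tile_size=16,
    sh_degree_to_use=-1,  # -1 uses the maximum SH degree available
)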

Parameters:
  • name (str) – The name of the Gaussian splat 3D scene. This must be unique among all scenes added to the viewer.

  • gaussian_splat_3d (GaussianSplat3d) – The Gaussian splat 3D scene to add.

  • tile_size (int) – The tile size to use for rendering. Default is 16.

  • min_radius_2d (float) – The minimum radius in pixels to use when rendering splats. Default is 0.0.

  • eps_2d (float) – The epsilon value to use when rendering splats. Default is 0.3.

  • antialias (bool) – Whether to use antialiasing when rendering splats. Default is False.

  • sh_degree_to_use (int) – The degree of spherical harmonics to use when rendering colors. If -1, the maximum degree supported by the Gaussian splat 3D scene is used. Default is -1.

Returns:

gaussian_splat_3d_view (GaussianSplat3dView) – A view for the Gaussian splats added to the scene.

add_point_cloud(name: str, points: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]], colors: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size | Sequence[Sequence[int | float | integer | floating]], point_size: float)[source]

Add a point cloud with per-point colors to the viewer and return a view for it.
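
Example usage (a minimal sketch with random points and colors, assuming the viewer server has already been initialized with fvdb.viz.init()):

import torch
import fvdb

scene = fvdb.viz.get_scene("My Scene")

# 1000 random points in the unit cube with random RGB colors in [0, 1]
points = torch.rand(1000, 3)
colors = torch.rand(1000, 3)

point_cloud_view = scene.add_point_cloud(
    name="my points",
    points=points,
    colors=colors,
    point_size=2.0,  # screen-space size in pixels
)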

Note

Colors must be in the range [0, 1]. You can pass in a single color as a tuple of 3 floats to color all points the same.

Note

The point_size is given in pixels and applies uniformly to all points in the point cloud.

Parameters:
  • name (str) – The name of the point cloud added to the viewer. If a point cloud with the same name already exists in the viewer, it will be replaced.

  • points (NumericMaxRank2) – The 3D points of the point cloud as a tensor-like object of shape (N, 3) where N is the number of points.

  • colors (NumericMaxRank2) – The colors of the points as a tensor-like object of shape (N, 3) where N is the number of points. Alternatively, you can pass in a single color as a tensor-like object of shape (3,) to color all points the same.

  • point_size (float) – The screen-space size (in pixels) of the points when rendering.

Returns:

point_cloud_view (PointCloudView) – A view for the point cloud added to the scene.

property camera_far: float

Get the far clipping plane distance for rendering. Objects farther from the camera than this distance will not be rendered.

Returns:

far (float) – The far clipping plane distance.

property camera_near: float

Get the near clipping plane distance for rendering. Objects closer to the camera than this distance will not be rendered.

Returns:

near (float) – The near clipping plane distance.

property camera_orbit_center: Tensor

Return the center of the camera orbit in world coordinates.

Note

The camera itself is positioned at: camera_position = orbit_center + orbit_radius * orbit_direction

Returns:

center (torch.Tensor) – A tensor of shape (3,) representing the camera orbit center in world coordinates.

property camera_orbit_direction: Tensor

Return the direction pointing from the orbit center to the camera position.

Note

The camera itself is positioned at: camera_position = orbit_center + orbit_radius * orbit_direction

Returns:

direction (torch.Tensor) – A tensor of shape (3,) representing the direction pointing from the orbit center to the camera position.

property camera_orbit_radius: float

Return the radius of the camera orbit.

Note

The camera itself is positioned at: camera_position = orbit_center + orbit_radius * orbit_direction

Returns:

radius (float) – The radius of the camera orbit.
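
The three orbit properties together determine the camera position, as in the note above. A small sketch of recovering the camera's world-space position from them:

import fvdb

scene = fvdb.viz.get_scene("My Scene")

# camera_position = orbit_center + orbit_radius * orbit_direction
camera_position = scene.camera_orbit_center + scene.camera_orbit_radius * scene.camera_orbit_direction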

property camera_up_direction: Tensor

Return the up vector of the camera, i.e. the direction that is considered ‘up’ in the camera’s view.

Returns:

up (torch.Tensor) – A tensor of shape (3,) representing the up vector of the camera.

reset()[source]

Reset the scene. This will reset viewer server state and clear all views in the scene.

set_camera_lookat(eye: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, center: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size, up: Tensor | ndarray | int | float | integer | floating | Sequence[int | float | integer | floating] | Size = [0.0, 1.0, 0.0])[source]

Set the pose of this scene’s camera from a camera origin (eye), a look-at point (center), and an up direction.
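
Example usage (a minimal sketch placing the camera at (3, 3, 3), looking at the origin, with +Y up):

import fvdb

scene = fvdb.viz.get_scene("My Scene")

# Position the camera at (3, 3, 3), aim it at the world origin, and use +Y as up
scene.set_camera_lookat(eye=[3.0, 3.0, 3.0], center=[0.0, 0.0, 0.0], up=[0.0, 1.0, 0.0])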

Parameters:
  • eye (NumericMaxRank1) – A tensor-like object of shape (3,) representing the camera position in world coordinates.

  • center (NumericMaxRank1) – A tensor-like object of shape (3,) representing the point the camera is looking at.

  • up (NumericMaxRank1) – A tensor-like object of shape (3,) representing the up direction of the camera.

class fvdb.viz.ShOrderingMode(*values)[source]
RGB_RGB_RGB = 'rgb_rgb_rgb'
RRR_GGG_BBB = 'rrr_ggg_bbb'
fvdb.viz.get_scene(name: str = 'fVDB Scene') Scene[source]

Get a fvdb.viz.Scene by name from the viewer server. If the scene does not exist, this function creates a new scene with the given name.
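
Example usage (a minimal sketch; "My Scene" is an arbitrary name):

import fvdb

# Initialize the viewer server, then fetch (or create) a scene by name
fvdb.viz.init()
scene = fvdb.viz.get_scene("My Scene")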

Parameters:

name (str) – The name of the scene to get.

Returns:

scene (fvdb.viz.Scene) – The scene with the given name.

fvdb.viz.grid_edge_network(grid: Grid) tuple[Tensor, Tensor][source]

Return a set of line segments representing the edges of the active voxels in the grid. This can be useful for visualizing a Grid as a wireframe.

The line segments are represented by an (N, 3) tensor of vertices and an (M, 2) tensor of indices into the vertex tensor, such that each edge is defined by a pair of vertex indices: edge_indices[j] = [v0, v1] means that the j-th edge connects the vertices at positions edge_vertices[v0] and edge_vertices[v1].

Example usage:

import fvdb

# Create a grid from points
grid = fvdb.Grid.from_points(...)

# Get the edge network of the grid, defining line segments for each edge of the active voxels
edge_vertices, edge_indices = fvdb.viz.grid_edge_network(grid)

# Get the start and end position of each edge
v0 = edge_vertices[edge_indices[:, 0]] # Start position
v1 = edge_vertices[edge_indices[:, 1]] # End position
Parameters:

grid (Grid) – The Grid to extract edges from.

Returns:
  • edge_vertices (torch.Tensor) – A tensor of shape (N, 3) representing the vertices of the edges.

  • edge_indices (torch.Tensor) – A tensor of shape (M, 2) representing the indices of the vertices that form each edge. i.e. edge_indices[j] = [v0, v1] means that the j-th edge connects vertices at positions edge_vertices[v0] and edge_vertices[v1].

fvdb.viz.gridbatch_edge_network(grid: GridBatch) tuple[JaggedTensor, JaggedTensor][source]

Return a set of line segments representing the edges of the active voxels in the grid batch. This can be useful for visualizing a GridBatch as a wireframe.

The line segments are represented by a jagged tensor of vertices and a jagged tensor of indices into the vertex tensor, such that each edge is defined by a pair of vertex indices: edge_indices[b][j] = [v0, v1] means that the j-th edge in the b-th grid connects the vertices at positions edge_vertices[b][v0] and edge_vertices[b][v1].

Example usage:

import fvdb

# Create a grid batch from multiple grids
grid_batch = fvdb.GridBatch.from_grids([...])

# Get the edge network of the grid batch, defining line segments for each edge of the active voxels
edge_vertices, edge_indices = fvdb.viz.gridbatch_edge_network(grid_batch)

# Iterate over each grid in the batch, and get the start and end position of each edge
for b in range(len(grid_batch)):
    # Get the start and end position of each edge in the b-th grid
    v0 = edge_vertices[b][edge_indices[b][:, 0]] # Start position
    v1 = edge_vertices[b][edge_indices[b][:, 1]] # End position

    # ... do something with v0 and v1 ...
Parameters:

grid (GridBatch) – The GridBatch containing B grids to extract edges from.

Returns:
  • edge_vertices (JaggedTensor) – A jagged tensor of shape (B, N_b, 3) representing the vertices of the edges.

  • edge_indices (JaggedTensor) – A jagged tensor of shape (B, M_b, 2) representing the indices of the vertices that form each edge. i.e. edge_indices[b][j] = [v0, v1] means that the j-th edge in the b-th grid connects vertices at positions edge_vertices[b][v0] and edge_vertices[b][v1].

fvdb.viz.init(ip_address: str = '127.0.0.1', port: int = 8080, vk_device_id: int = 0, verbose: bool = False)[source]

Initialize the viewer web server on the given IP address and port. You must call this function before visualizing any scenes.

Example usage:

import fvdb

# Initialize the viewer server on localhost:8080
fvdb.viz.init(ip_address="127.0.0.1", port=8080)

# Add a scene to the viewer with a point cloud in the scene
scene = fvdb.viz.Scene("My Scene")
scene.add_point_cloud(...)

# Show the viewer in the browser or inline in a Jupyter notebook
fvdb.viz.show()

Note

If the viewer server is already initialized, this function will do nothing and will print a warning message.

Parameters:
  • ip_address (str) – The IP address to bind the viewer server to. Default is "127.0.0.1".

  • port (int) – The port to bind the viewer server to. Default is 8080.

  • vk_device_id (int) – The Vulkan device ID to use for rendering. Default is 0.

  • verbose (bool) – If True, the viewer server will print verbose output to the console. Default is False.

fvdb.viz.show()[source]

Show an interactive viewer in the browser or inline in a Jupyter notebook.

Example usage:

import fvdb

# Initialize the viewer server on localhost:8080
fvdb.viz.init(ip_address="127.0.0.1", port=8080)

# Add a scene to the viewer with a point cloud in the scene
scene = fvdb.viz.Scene("My Scene")
scene.add_point_cloud(...)

# Show the viewer in the browser or inline in a Jupyter notebook
fvdb.viz.show()

Note

You must call fvdb.viz.init() before calling this function. If the viewer server is not initialized, this function will raise a RuntimeError.