# Virtual KITTI 2 (converted to VLBM format)

This dataset contains all 50 sequences (5 scenes x 10 variations) from the Virtual KITTI 2 dataset, converted to the VLBM/Flock4D-compatible format using the conversion tool `preprocess_vkitti2.py`.
## Scale
Virtual KITTI 2 provides 5 scenes x 10 variations = 50 sequences total. All 50 sequences are included in this dataset.
| Scene | Frames | Variations | Description |
|---|---|---|---|
| Scene01 | 447 | 10 | Urban driving |
| Scene02 | 233 | 10 | Urban driving |
| Scene06 | 270 | 10 | Urban driving |
| Scene18 | 339 | 10 | Urban driving |
| Scene20 | 837 | 10 | Urban driving |
Variations: 15-deg-left, 15-deg-right, 30-deg-left, 30-deg-right, clone, fog, morning, overcast, rain, sunset.
## Trajectory Generation

Virtual KITTI 2 does not provide ground-truth point trajectories. Instead, sparse 3D/2D trajectories biased toward vehicle targets are generated from forward scene flow and dense depth maps via `preprocess_vkitti2.py`:

### Method: Vehicle-Biased Sampling with Scene Flow Chaining

1. **Vehicle-biased query point sampling**: At initialization frames (every 50 frames), sample points using a two-tier grid strategy:
   - Fine grid (step = 4 pixels) applied only to vehicle pixel regions (Car/Truck/Van, detected via class segmentation)
   - Coarse grid (step = 16 pixels) applied only to background regions
   - This reduces trajectory density by ~86% while increasing the vehicle-to-background ratio from 19% to 79%
2. **Unproject to 3D**: Use depth maps and camera intrinsics to lift 2D grid points to 3D camera space.
3. **Chain forward scene flow**: For each timestep, look up the scene flow at the current 2D position (bilinear interpolation) and apply `P_{t+1}^{cam_{t+1}} = P_t^{cam_t} + SF(u, v, t)`.
4. **Project to 2D**: Project the new 3D point to 2D using the next frame's intrinsics.
5. **Visibility determination**: A point remains visible if it (a) lies within image bounds, (b) has positive depth, and (c) passes the occlusion test (predicted depth and depth-map depth agree within 10% relative or 0.5 m absolute).
6. **Depth re-anchoring**: After each step, re-read the depth value from the depth map at the projected 2D position to prevent drift accumulation.
7. **Multi-frame initialization**: New trajectory batches are initialized every 50 frames to maintain coverage throughout the sequence.
8. **World coordinates**: All 3D trajectory points are stored in world coordinates by applying the inverse of the world-to-camera extrinsic matrix.
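The chaining loop (steps 2–4, plus the step-6 depth read) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the converter's code: `depth_t`, `scene_flow_t`, `K_t`, and `K_t1` are assumed per-frame inputs, and nearest-neighbor lookup stands in for the bilinear interpolation used by `preprocess_vkitti2.py`; the occlusion half of step 5 is sketched under Conversion Parameters.

```python
import numpy as np

def chain_one_step(uv, K_t, K_t1, depth_t, scene_flow_t):
    """Advance 2D query points uv (N, 2) from frame t to frame t+1.

    depth_t: (H, W) depth in meters; scene_flow_t: (H, W, 3) forward scene
    flow in camera space; K_t, K_t1: (3, 3) intrinsics of frames t and t+1.
    """
    H, W = depth_t.shape
    # Nearest-neighbor lookup (the converter uses bilinear interpolation).
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    z = depth_t[v, u].astype(np.float64)                # depth re-read at the 2D position (step 6)
    # Step 2: unproject pixels to 3D camera space.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    P_t = (np.linalg.inv(K_t) @ pix.T).T * z[:, None]   # (N, 3) in cam_t
    # Step 3: P_{t+1}^{cam_{t+1}} = P_t^{cam_t} + SF(u, v, t)
    P_t1 = P_t + scene_flow_t[v, u]
    # Step 4: project back to pixels with the next frame's intrinsics.
    proj = (K_t1 @ P_t1.T).T
    uv_next = proj[:, :2] / proj[:, 2:3]
    # Bounds / positive-depth part of the visibility test (step 5).
    visible = (P_t1[:, 2] > 0) & (uv_next[:, 0] >= 0) & (uv_next[:, 0] < W) \
              & (uv_next[:, 1] >= 0) & (uv_next[:, 1] < H)
    return uv_next, P_t1, visible
```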
### Accuracy

- Scene flow chaining gives sub-centimeter accuracy per step on this synthetic dataset (verified by comparing predicted depth at t+1 against the ground-truth depth map).
- The stored float16 annotations introduce ~2 pixels of reprojection error due to quantization of world-space coordinates (which can reach ~100 m, beyond float16 precision). Float32 accuracy is sub-pixel.
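The ~2 pixel figure is consistent with float16 spacing: between 64 m and 128 m, adjacent float16 values are 0.0625 m apart, so rounding a world coordinate near 100 m costs up to ~3 cm. A quick self-contained check on synthetic values (not dataset data):

```python
import numpy as np

# World coordinates around 100 m, stored as float16 like the .npy annotations.
coords = np.random.uniform(80.0, 120.0, size=100_000).astype(np.float32)
err = np.abs(coords.astype(np.float16).astype(np.float32) - coords)
print(err.max())  # ~0.031 m: half the float16 spacing (0.0625 m) in [64, 128)
```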
## Dataset Structure

Each sequence directory follows this layout:
```
{Scene}_{variation}/
├── rgbs/
│   ├── rgb_00000.jpg
│   ├── rgb_00001.jpg
│   └── ...
├── depths/
│   ├── depth_00000.npz
│   ├── depth_00001.npz
│   └── ...
├── intrinsics.npy
├── extrinsics.npy
├── trajs_2d.npy
├── trajs_3d.npy
├── visibilities.npy
└── scene_info.json
```
### File Descriptions

- `rgbs/`: RGB frames from the left camera saved as JPEG (`rgb_XXXXX.jpg`). Resolution is 1242x375 pixels.
- `depths/`: Dense depth maps saved as compressed NumPy archives (`depth_XXXXX.npz`). Each archive stores a float16 array under the key `depth` of shape `(H, W)`, in meters. Original VKITTI2 depth (uint16, 1 px = 1 cm) is converted to float16 meters.
- `intrinsics.npy`: Camera intrinsic matrices for each frame, `(T, 3, 3)`.
- `extrinsics.npy`: World-to-camera (W2C) extrinsic matrices for each frame, `(T, 4, 4)`.
- `trajs_2d.npy`: 2D trajectories, `(T, N, 2)`, pixel coordinates (x, y).
- `trajs_3d.npy`: 3D trajectories, `(T, N, 3)`, world-space coordinates (x, y, z); zero-filled where invisible.
- `visibilities.npy`: Visibility flags, `(T, N)` (1.0 visible, 0.0 not visible).
- `scene_info.json`: Per-sequence metadata. Fields: `source`, `scene`, `variation`, `num_frames`, `num_trajectories`, `image_size`, `depth_range`, `depth_type`, `grid_step`, `bg_grid_step`, `reinit_every`, `num_batches`, `avg_visible_per_frame`, `avg_track_length`, `trajectory_method`.
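As a sanity check on these conventions, visible world-space points in `trajs_3d` should reproject onto `trajs_2d` through the W2C `extrinsics` and the `intrinsics`, up to the float16 quantization discussed above. A sketch, reusing the variables loaded in the usage example at the end of this card:

```python
import numpy as np

t = 0
mask = vis[t] > 0.5                        # visible points at frame t
Pw = trajs_3d[t, mask].astype(np.float64)  # world-space points (M, 3)

# World -> camera via the W2C extrinsic, then pinhole projection.
Pw_h = np.concatenate([Pw, np.ones((len(Pw), 1))], axis=1)  # homogeneous (M, 4)
Pc = (extrinsics[t] @ Pw_h.T).T[:, :3]                      # camera space
uv = (intrinsics[t] @ Pc.T).T
uv = uv[:, :2] / uv[:, 2:3]

print(np.abs(uv - trajs_2d[t, mask]).mean())  # ~2 px, from float16 storage
```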
## Dataset Statistics

### Overall
| Metric | Value |
|---|---|
| Total sequences | 50 |
| Total frames | 21,260 |
| Total trajectories | 1,528,124 |
| Dataset size | 8.2 GB |
| Image resolution | 1242 x 375 px |
| Depth range | [1.08, 599.99] meters |
| Depth type | Dense (ground truth) |
| Sampling strategy | Vehicle-biased (fine grid on vehicles, coarse grid on background) |
### Per-Scene Summary
| Scene | Variations | Frames | Avg Trajectories | Avg Visible/Frame | Avg Track Length |
|---|---|---|---|---|---|
| Scene01 | 10 | 447 | 38,666 | 806 | 9.3 |
| Scene02 | 10 | 233 | 14,484 | 1,204 | 19.5 |
| Scene06 | 10 | 270 | 13,799 | 2,361 | 46.7 |
| Scene18 | 10 | 339 | 20,552 | 1,921 | 31.3 |
| Scene20 | 10 | 837 | 65,312 | 1,774 | 23.6 |
### Per-Sequence Average
| Metric | Value |
|---|---|
| Frames per sequence | 425 (range: 233–837) |
| Trajectories per sequence | 30,562 (range: 13,799–65,312) |
| Avg visible points per frame | 1,613 |
| Avg track length | 26.1 frames |
## Conversion Parameters
| Parameter | Value |
|---|---|
| Fine grid step (vehicles) | 4 pixels |
| Coarse grid step (background) | 16 pixels |
| Reinit every | 50 frames |
| Max depth | 600.0 m |
| Min depth | 0.5 m |
| Occlusion threshold | 10% relative or 0.5m absolute |
| Trajectory method | Scene flow chaining with depth re-anchor, vehicle-biased multi-grid sampling |
| Class detection | RGB color matching (Car/Truck/Van from class segmentation) |
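The occlusion threshold corresponds to a test of roughly this shape (an illustrative helper, not the converter's exact code):

```python
import numpy as np

def passes_occlusion_test(pred_depth, map_depth, rel=0.10, abs_tol=0.5):
    """Visible when the chained depth agrees with the ground-truth depth
    map within 10% relative or 0.5 m absolute."""
    diff = np.abs(pred_depth - map_depth)
    return (diff <= rel * map_depth) | (diff <= abs_tol)
```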
## Data Specifications
- Image format: JPEG (RGB), 1242x375 px
- Depth format: NPZ (float16), dense (ground truth from VKITTI2)
- Annotation format: individual `.npy` files (float16 arrays for compact storage)
- Coordinate system: x = right, y = down, z = forward (camera space)
- Extrinsics: world-to-camera (W2C) 4x4 matrices
## Key Characteristics: Vehicle-Biased Sampling

This dataset prioritizes trajectories on vehicle targets over background.

**Sampling Strategy** (a minimal sketch follows this list):
- Vehicle pixels (Car/Truck/Van) → fine 4-pixel grid: dense coverage of targets
- Background pixels → coarse 16-pixel grid: sparser environmental context
- Result: 79% of trajectories come from vehicle regions (vs. 19% under uniform dense sampling)
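A minimal sketch of the two-tier grid, assuming a boolean `vehicle_mask` of shape `(H, W)` derived from the class segmentation (the function name is hypothetical):

```python
import numpy as np

def two_tier_grid(vehicle_mask, fine_step=4, coarse_step=16):
    """Return (N, 2) query pixels: fine grid on vehicles, coarse elsewhere."""
    H, W = vehicle_mask.shape
    vs, us = np.mgrid[0:H:fine_step, 0:W:fine_step]
    fine = np.stack([us.ravel(), vs.ravel()], axis=-1)
    fine = fine[vehicle_mask[fine[:, 1], fine[:, 0]]]           # vehicle pixels only
    vs, us = np.mgrid[0:H:coarse_step, 0:W:coarse_step]
    coarse = np.stack([us.ravel(), vs.ravel()], axis=-1)
    coarse = coarse[~vehicle_mask[coarse[:, 1], coarse[:, 0]]]  # background only
    return np.concatenate([fine, coarse], axis=0)
```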
**Advantages:**
- Target-centric: Most trajectories track actual driving objects
- Sparser annotations: 14% of original dense data (~1.5M vs 10.8M trajectories), 13% smaller dataset
- Improved quality: Longer average track length (26.1 vs 24.3 frames) due to vehicle motion consistency
- Faster processing: 29% reduction in conversion time (15 min vs 21 min for 50 sequences)
- Discrete rather than dense: Suitable for object-centric learning tasks (e.g., vehicle tracking, motion forecasting)
## Usage Example (Python)
```python
import numpy as np
from PIL import Image
from pathlib import Path
import json

seq_dir = Path("data/vkitti2_vlbm/Scene01_clone")

# Load annotations
trajs_2d = np.load(seq_dir / "trajs_2d.npy")      # (T, N, 2)
trajs_3d = np.load(seq_dir / "trajs_3d.npy")      # (T, N, 3)
vis = np.load(seq_dir / "visibilities.npy")       # (T, N)
intrinsics = np.load(seq_dir / "intrinsics.npy")  # (T, 3, 3)
extrinsics = np.load(seq_dir / "extrinsics.npy")  # (T, 4, 4)

# Load an image and depth map
frame_idx = 0
rgb = Image.open(seq_dir / "rgbs" / f"rgb_{frame_idx:05d}.jpg")
depth_npz = np.load(seq_dir / "depths" / f"depth_{frame_idx:05d}.npz")
depth = depth_npz["depth"]  # float16 array (H, W)

# Load scene info
with open(seq_dir / "scene_info.json", "r") as f:
    scene_info = json.load(f)
print(scene_info)
```
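For example, one way to eyeball a sequence is to overlay the visible query points on a frame (continuing the variables above; matplotlib assumed available):

```python
import matplotlib.pyplot as plt

# Visible 2D points at frame_idx, drawn over the RGB image.
pts = trajs_2d[frame_idx][vis[frame_idx] > 0.5]
plt.imshow(rgb)
plt.scatter(pts[:, 0], pts[:, 1], s=1, c="red")
plt.title(f"{len(pts)} visible points at frame {frame_idx}")
plt.show()
```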
## Citation

Please cite the original Virtual KITTI 2 dataset when using the converted data:
```bibtex
@misc{cabon2020vkitti2,
  title={Virtual KITTI 2},
  author={Cabon, Yohann and Murray, Naila and Humenberger, Martin},
  year={2020},
  eprint={2001.10773},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```