
GNN Constraint-Aware World Model Dataset (v3)

Real robot episodes with per-frame constraint graphs, SAM2 segmentation masks + 256-D feature embeddings, full 3D depth bundles, and synchronized robot states across two manipulation domains. Both domains share the v3 on-disk layout (same JSON/NPZ schemas, same delta-encoded frame_states, same fully-connected PyG expansion at load time) and now share a unified 270-D node feature format — the PyG loader reads a fixed 10-D type encoding from a YAML config so both domains produce identical node dimensionality.

  • Project: GNN world model for constraint-aware video generation
  • Author: Texas A&M University
  • Hardware: UR5e + Robotiq 2F-85 gripper, OAK-D Pro (static side view)

What's in this repo — at a glance

| Where | Contains | Use for |
|---|---|---|
| `session_*` (Desktop) and `hanoi/session_hanoi_*` | Raw episodes + per-frame `annotations/` (masks, embeddings, depth bundles, `side_graph.json`) | Training data for the world model |
| `config/type_encoding_*.yaml` | Fixed 10-D per-type encoding YAMLs | Loader inputs (pick one per run) |
| `gnn_world_model_loader.py` | Self-contained PyG loader (one function per variant; also a `list_all_frame_graphs` iterator) | Reading the dataset → `torch_geometric.data.Data` |
| `examples/` | 8 runnable scripts (one per step) — see "Full instructions" below | Runnable entry points for every step of the pipeline |
| `tools/hanoi_pipeline/` | SAM2-FT checkpoint + full Python pipeline (auto-labeler, `HanoiGraphInferer`, materializer, `src/` modules) | Auto-labeling new Hanoi sessions and RGB→graph inference |

Everything needed to go from a predicted RGB frame to a constraint graph is bundled here — no external repo required. The only exception is Meta's SAM2 base checkpoint (320 MB), which Step 3 below describes how to install.

Domains at a glance

| Domain | Graph variants offered | Node vocab size | Node feature dim | Edge feature dim | Data root |
|---|---|---|---|---|---|
| Desktop disassembly | products-only, with-robot-node, with-robot-state, with-robot-action | 9 (8 products + robot) | 270 | 3 | `session_<date>_<time>/episode_XX/` |
| Tower of Hanoi | products-only, with-robot-state, with-robot-action | 4 (`ring_1`..`ring_4`) | 270 | 3 | `hanoi/session_hanoi_<date>_<time>/episode_XX/` |

Node feature dim = 256 (SAM2 emb) + 3 (3D pos) + 10 (fixed type encoding) + 1 (visibility) = 270. The 10-D type encoding is a fixed, deterministic per-type vector (NOT trained) read from config/type_encoding_random.yaml or config/type_encoding_clip.yaml at load time — so both domains, and any future component vocabulary up to 13 types, share the same node dimension.

Four loader variants (all return torch_geometric.data.Data):

  • load_pyg_frame_products_only — V1 bare graph: products/rings only, no robot info.
  • load_pyg_frame_with_robot — V2 ablation: robot attached as a graph NODE (Desktop only; Hanoi has no robot mask in v1, so this falls back to products-only).
  • load_pyg_frame_with_robot_state — V3 recommended: products-only graph + robot_state=[13] side-tensor. Works for both domains because robot_states.npy is present everywhere.
  • load_pyg_frame_with_robot_action — V3 action-conditioned: same as above + robot_action=[13] delta for the next frame.

The three paper options map cleanly: Option 1 (direct graph encoding) → products_only; Option 2 (encoder → latent → world model with robot context) → with_robot_state; Option 3 (action-conditioned GNN) → with_robot_action.

File layout (same for both domains)

episode_XX/
├── metadata.json            # episode metadata (domain-specific extras)
├── robot_states.npy         # (T, 13) float32 — joints + TCP + gripper
├── robot_actions.npy        # (T-1, 13) float32 — frame deltas
├── timestamps.npy           # (T, 3) float64
├── side/
│   ├── rgb/frame_XXXXXX.png     # 1280×720 RGB
│   └── depth/frame_XXXXXX.npy   # 1280×720 uint16 (mm)
├── wrist/                   # raw wrist camera (not used in v3)
└── annotations/
    ├── side_graph.json          # components, static edges, frame_states
    ├── side_masks/              # {component_id: (H,W) uint8} per frame
    ├── side_embeddings/         # {component_id: (256,) float32} per frame
    ├── side_depth_info/         # flat-keyed depth bundle per frame
    ├── side_robot/              # robot bundle per frame (visible flag)
    └── dataset_card.json        # format description

Alignment guarantee: every labeled frame index has files in all four of side_masks/, side_embeddings/, side_depth_info/, side_robot/. Files are keyed by the same integer frame index, so a loader can key off the mask directory and trust the rest to be present.
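
This guarantee is easy to verify per episode. A minimal sketch, assuming only the directory layout described above (the check_alignment helper name is ours, not part of the shipped tooling):

```python
from pathlib import Path

# The four per-frame annotation directories that must stay in lockstep.
ANNOT_DIRS = ["side_masks", "side_embeddings", "side_depth_info", "side_robot"]

def frame_indices(d: Path) -> set:
    """Integer frame indices present in one annotation directory."""
    return {int(p.stem.split("_")[1]) for p in d.glob("frame_*.npz")}

def check_alignment(episode_dir) -> set:
    """Verify every labeled frame index has files in all four
    annotation directories; return the common index set."""
    anno = Path(episode_dir) / "annotations"
    index_sets = [frame_indices(anno / name) for name in ANNOT_DIRS]
    assert all(s == index_sets[0] for s in index_sets), "annotation dirs disagree"
    return index_sets[0]
```

As the alignment guarantee states, a loader can then key off side_masks/ alone and trust the other three directories to match.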

Pipeline — four stages from raw video to training-ready graphs

┌─────────────┐     ┌───────────────┐     ┌───────────────────┐     ┌──────────────────┐
│ Collection  │ →   │ Auto-labeling │ →   │ Verification / UI │ →   │  PyG loader @    │
│ (30 Hz RGBD │     │ (SAM2-FT)     │     │ (optional edit)   │     │  training time   │
│  + robot)   │     │               │     │                   │     │                  │
└─────────────┘     └───────────────┘     └───────────────────┘     └──────────────────┘
   episode_XX/         annotations/           annotations/             torch_geometric
                        masks, emb,            (corrected)              .data.Data
                        depth, robot,                                  x=[N,270], edge=[E,3]
                        side_graph.json

Stage 1 — Collection

30 Hz synchronous capture of side RGB + depth + robot state into episode_XX/. No image processing or graph work happens here.

  • Desktop: human teleop via a game controller; the operator decides what to disassemble in what order.
  • Hanoi: autonomous — scripts/hanoi/orchestrator.py pre-plans N missions upfront from the captured initial state, samples each as classical/single_ring/rearrange at 40/40/20 weights, and writes metadata.json with goal_prompt, initial_state, target_state, and the deterministic solver_moves (the reference action sequence from a classical Hanoi BFS solver). The UR5e executes each mission with blended waypoints and per-ring grasp offsets.

Stage 2 — Auto-labeling (SAM2 detection → graph)

Separate offline step that produces the entire annotations/ tree. Hanoi is fully automatic in v3; Desktop currently uses manual + SAM2-assisted labeling. The Hanoi auto-labeler ships inside this dataset under tools/hanoi_pipeline/ (so users cloning the dataset can reproduce or extend it):

python tools/hanoi_pipeline/scripts/hanoi/auto_label.py <session_dir>

Per-frame algorithm (Hanoi):

  1. Ring detection. HSV range + color-specific mask → largest connected blob → bbox per ring.
  2. SAM2 segmentation. Run SAM2 with (bbox + centroid point) prompt on each ring. The Hanoi-fine-tuned checkpoint is auto-loaded if present (checkpoints/sam2_hanoi_ft.pt); otherwise falls back to vanilla sam2.1_hiera_base_plus.
  3. 256-D embedding. Masked average-pool of SAM2's vision_features spatial grid over each ring mask.
  4. Depth backprojection. Masked pixels → (u, v) + depth → 3D point cloud in camera frame; centroid used as the node position.

Per-episode algorithm:

  5. Grasp-interval detection. Read robot_states.npy[:, 12] (Robotiq 2F-85 gripper position, 0-255). Find the lowest stable plateau above the fully-open cutoff (baseline ≈ pre-grasp width), threshold at baseline+10, and morphologically close to bridge single-frame glitches, yielding [(start, end, ring_id)] intervals — one per move.
  6. Symbolic state unroll. Starting from initial_state, apply solver_moves[i] after each interval closes, marking the moved ring as held=True during the interval and recording the resulting per-frame constraints / visibility / held dicts as deltas in frame_states. No per-frame ring re-identification is needed; the move plan is ground truth.

Stage 3 — Runtime inference (RGB → graph)

The single-frame inferer (tools/hanoi_pipeline/infer_graph_from_frame.py) is what you call inside a world-model prediction loop. Given a predicted RGB (and optional depth), it returns the same graph schema Stage 2 produced — no offline pipeline needed, no temporal context required:

from infer_graph_from_frame import HanoiGraphInferer
inferer = HanoiGraphInferer()
result = inferer(predicted_rgb, depth=predicted_depth)
# result["graph"] / ["masks"] / ["embeddings"] / ["depth_info"] / ["ring_states"]

This is the path from any predicted image straight to a PyG-compatible graph — same masks, same 256-D SAM2 embeddings, same 3-D positions as the training annotations. See Step 3 below for a runnable wrapper.

SAM2 models used in this dataset

Two checkpoints are in play, both distributed by this repo under tools/hanoi_pipeline/checkpoints/ (also available from the SAM2 repo):

| File | Size | What it contains | When it's used |
|---|---|---|---|
| `sam2.1_hiera_base_plus.pt` (Meta AI) | ~320 MB | Full SAM2 model — image encoder + prompt encoder + mask decoder | Loaded as the base; frozen during fine-tuning and inference |
| `sam2_hanoi_ft.pt` (this dataset) | ~16 MB | Decoder + prompt_encoder only — fine-tuned weights | Auto-loaded when present; overrides the base decoder/prompt_encoder |

The 16 MB FT checkpoint is small because the image encoder stays frozen at the base SAM2 weights. Training data: ~800 (image, bbox, ground-truth-mask) triples pulled from manually-corrected Hanoi episodes. Per-ring validation IoU (cross-episode held-out solve):

| Ring | Vanilla SAM2 base | Hanoi-FT | Δ |
|---|---|---|---|
| ring_1 (red) | 0.786 | 0.851 | +6.5 pp |
| ring_2 (yellow) | 0.803 | 0.842 | +3.9 pp |
| ring_3 (green) | 0.814 | 0.854 | +4.0 pp |
| ring_4 (blue) | 0.794 | 0.846 | +5.2 pp |
| macro mean | 0.799 | 0.848 | +4.9 pp |

Biggest gains are on partially-gripper-occluded rings where vanilla SAM2 tended to oversegment onto the gripper finger.

Usage in the world-model prediction loop. At inference time you don't need to run the full auto_label.py pipeline. Use the provided single-frame inferer:

from tools.hanoi_pipeline.infer_graph_from_frame import HanoiGraphInferer

inferer = HanoiGraphInferer()        # loads base + FT once
result = inferer(rgb_image, depth=depth_image)

graph      = result["graph"]         # side_graph.json schema
masks      = result["masks"]         # {ring_1..ring_4: (H, W) uint8}
embeddings = result["embeddings"]    # {ring_1..ring_4: (256,) float32}
depth_info = result["depth_info"]    # flat-keyed 3D bundle (empty if depth=None)
states     = result["ring_states"]   # {ring_id: RingState(peg, stack_index)}

This returns the same schema as the offline pipeline's per-frame output, so the PyG loaders work identically on both sources. Override the checkpoint via SAM2_FINETUNE_CKPT=<path>; set to empty string to force vanilla SAM2.

Desktop Disassembly Domain

Components (9 types)

Eight product types + one robot agent. Multiple instances (e.g. ram_1, ram_2) share the same 10-D type encoding and are disambiguated by SAM2 embedding + 3D position.

| Index | Type | Color | Notes |
|---|---|---|---|
| 0 | cpu_fan | #FF6B6B | Always visible at start |
| 1 | cpu_bracket | #4ECDC4 | Hidden at start (under fan) |
| 2 | cpu | #45B7D1 | Hidden at start |
| 3 | ram_clip | #96CEB4 | Multi-instance |
| 4 | ram | #FFEAA7 | Multi-instance |
| 5 | connector | #DDA0DD | Multi-instance |
| 6 | graphic_card | #FF8C42 | Always visible |
| 7 | motherboard | #8B5CF6 | Always visible (base) |
| 8 | robot | #F5F5F5 | Agent node (stored separately in side_robot/) |

Sparse constraint edges

Directed prerequisite relations — A -> B means "A must be removed before B can be removed":

cpu_fan      -> cpu_bracket         (fan covers bracket)
cpu_fan      -> motherboard
cpu_bracket  -> cpu
cpu_bracket  -> motherboard
cpu          -> motherboard
ram_N        -> motherboard
ram_clip_N   -> motherboard
ram_clip_N   -> ram_M               (user pairs manually)
connector_N  -> motherboard
graphic_card -> motherboard

Typical episode has 10-15 product nodes and 10-14 stored directed edges.

Node feature layout (270-D)

[0   : 256]   SAM2 embedding (256)       — masked avg pool over vision_features
[256 : 259]   3D position (3)            — centroid in camera frame (meters)
[259 : 269]   type encoding (10)         — fixed 10-D vector from
                                            config/type_encoding_<method>.yaml
                                            (shared across domains)
[269]         visibility (1)             — 1 if visible this frame, else 0

Total: 270-D. The 10-D type slot is a deterministic encoding (NOT trained) — see "Fixed 10-D type encoding — how it's made" below.
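
As a concrete sanity check, the 270-D layout above can be assembled slot by slot. This is an illustrative sketch; build_node_feature is a hypothetical helper, not part of the shipped loader:

```python
import numpy as np

SAM2_EMB_DIM, POS_DIM, TYPE_DIM = 256, 3, 10

def build_node_feature(emb, pos, type_vec, visible):
    """Concatenate the four slots of the 270-D node feature:
    [0:256] SAM2 embedding, [256:259] 3D centroid (meters),
    [259:269] fixed 10-D type encoding, [269] visibility flag."""
    x = np.concatenate([
        np.asarray(emb, dtype=np.float32),       # 256-D SAM2 embedding
        np.asarray(pos, dtype=np.float32),       # 3-D position
        np.asarray(type_vec, dtype=np.float32),  # 10-D type encoding
        np.asarray([1.0 if visible else 0.0], dtype=np.float32),  # visibility
    ])
    assert x.shape == (270,)
    return x
```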

Available Desktop episodes

| Session / Episode | Labeled frames | Goal |
|---|---|---|
| session_0408_162129/episode_00 | 346 | cpu_fan |
| session_0410_125013/episode_00 | 473 | cpu_fan |
| session_0410_125013/episode_01 | 525 | graphic_card |

Total: 1344 frames.

Tower of Hanoi Domain

Components (4 types) — rings only, no robot node in v1

Hanoi episodes use native ring IDs (ring_1 .. ring_4) in components and as npz keys — no desktop-proxy remapping, and no robot node in v1. type_vocab is ["ring_1", "ring_2", "ring_3", "ring_4"] (length 4). Robot segmentation is deferred; side_robot/*.npz is zero-filled per frame for format uniformity but never becomes a graph node.

Note on V2 vs V3 for Hanoi. V2 (with_robot — robot as graph node) requires a labeled robot mask/embedding and is therefore Desktop-only in v1. V3 (with_robot_state / with_robot_action) uses the 13-D robot_states.npy trace, which IS recorded for Hanoi too — so V3 loaders work for both domains.

| ID | Color | Disk size | Role |
|---|---|---|---|
| ring_1 | red (#E63946) | 32 mm | Smallest |
| ring_2 | yellow (#F1C40F) | 42 mm | |
| ring_3 | green (#2ECC71) | 52 mm | |
| ring_4 | blue (#2E86DE) | 62 mm | Largest |

Mask .npz files carry the literal keys ring_1, ring_2, ring_3, ring_4. No robot in type_vocab, no robot edges, no robot node appended at load time.

Mission kinds (40 / 40 / 20 sampling)

Every Hanoi metadata.json records mission_kind, goal_prompt, initial_state, target_state, and solver_moves. The sampler picks a kind per episode:

| Kind | Weight | Target |
|---|---|---|
| classical | 0.40 | All 4 rings stacked in size order on one peg |
| single_ring | 0.40 | One designated ring moved to a new peg; every other ring returns to its initial peg in size order |
| rearrange | 0.20 | Uniformly sampled valid (larger-under-smaller) configuration |

Physical peg layout (important for prompt grounding)

Throughout the dataset, rings move between three pegs labelled A, B, and C. A text-conditioned world model has no way to know which letter corresponds to which physical peg, so the goal_prompts use physically-grounded labels paired with the (peg A/B/C) cross-reference:

| Label | Side-camera view (primary) | Wrist-camera view (auxiliary) |
|---|---|---|
| peg A | the near peg, closest to the camera (bottom of image) | right side of the frame |
| peg B | the middle peg, middle of the image | middle of the frame |
| peg C | the far peg, farthest from the camera (top of image) | left side of the frame |

All structural fields (initial_state, target_state, solver_moves, edge lists in side_graph.json) continue to use the letter labels A/B/C — they're stable identifiers that downstream loaders already depend on. Only the natural-language goal_prompt uses the descriptive names.

Goal-prompt format (run Step 10 once after download)

The goal_prompt is the self-contained, natural-language task description a video world model (e.g. Cosmos Predict 2.5) reads:

"Starting state: <S>.  Task: <T>.  Target state: <G>."

where <S> and <G> enumerate the ring layout top-of-stack first per peg, using the grounded labels above, and <T> is a plain-English instruction derived from mission_kind. Full examples after running Step 10:

# classical
"Starting state: red → green (top → bottom) on the near peg (peg A);
                 yellow → blue (top → bottom) on the far peg (peg C).
 Task: stack all rings onto the far peg in size order (smallest on top),
       solving the Hanoi puzzle.
 Target state: red → yellow → green → blue (top → bottom) on the far peg."

# single_ring
"Starting state: green alone on the near peg (peg A); blue alone on the middle peg (peg B);
                 red → yellow (top → bottom) on the far peg (peg C).
 Task: move the blue ring from the middle peg to the near peg; every other ring must end up
       back on its starting peg, sorted smallest-on-top.
 Target state: green → blue (top → bottom) on the near peg; red → yellow on the far peg."

# rearrange
"Starting state: red → yellow → green → blue (top → bottom) on the far peg (peg C).
 Task: rearrange the rings into the target configuration below; any legal move sequence that
       reaches the target is acceptable.
 Target state: blue alone on the near peg; red → yellow → green on the middle peg."

Why a separate script? The dataset shipped to HF was captured and uploaded over several weeks while the prompt format was iterated. To avoid re-uploading hundreds of gigabytes every time the prompt template improves, the canonical prompt is re-derived locally by examples/10_upgrade_prompts.py from three stable structural fields that never change: mission_kind, initial_state, target_state. Run it once after download and every episode's goal_prompt + side_graph.json gets normalised to the form above.
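
A minimal sketch of that derivation, assuming the canonical template shown above (the real implementation is examples/10_upgrade_prompts.py; the helper names and exact task wording here are illustrative approximations):

```python
# Grounded peg labels and ring colors from the dataset card.
PEG_NAME = {"A": "the near peg", "B": "the middle peg", "C": "the far peg"}
RING_COLOR = {"ring_1": "red", "ring_2": "yellow", "ring_3": "green", "ring_4": "blue"}

def describe_state(state):
    """state: {peg: [ring ids, top -> bottom]} -> grounded description."""
    parts = []
    for peg, rings in state.items():
        if not rings:
            continue
        colors = " → ".join(RING_COLOR[r] for r in rings)
        qual = "alone" if len(rings) == 1 else "(top → bottom)"
        parts.append(f"{colors} {qual} on {PEG_NAME[peg]} (peg {peg})")
    return "; ".join(parts)

def goal_prompt(mission_kind, initial_state, target_state):
    """Re-derive a goal_prompt from the three stable structural fields."""
    task = {
        "classical": "stack all rings onto one peg in size order (smallest on top), solving the Hanoi puzzle",
        "single_ring": "move the designated ring to its target peg; every other ring must end up back on its starting peg, sorted smallest-on-top",
        "rearrange": "rearrange the rings into the target configuration; any legal move sequence that reaches the target is acceptable",
    }[mission_kind]
    return (f"Starting state: {describe_state(initial_state)}.  "
            f"Task: {task}.  Target state: {describe_state(target_state)}.")
```

Because only mission_kind, initial_state, and target_state feed the template, re-running the script never touches the captured frames.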

Structural edges (static, always 6)

The 6 smaller → larger directed pairs are stored verbatim in side_graph.json:

ring_1 -> ring_2     ring_1 -> ring_3     ring_1 -> ring_4
                     ring_2 -> ring_3     ring_2 -> ring_4
                                          ring_3 -> ring_4

At PyG load time the loader expands to 4 × 3 = 12 fully-connected directed edges. The reverse (larger → smaller) direction carries the same has_constraint / is_locked but flipped src_blocks_dst.
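
The 6 → 12 expansion can be sketched in a few lines (expand_edges is a hypothetical helper illustrating the rule; the shipped loader does the equivalent internally):

```python
from itertools import permutations

RINGS = ["ring_1", "ring_2", "ring_3", "ring_4"]

def expand_edges(stored_pairs):
    """Expand the 6 stored smaller->larger pairs into the 12 directed
    edges PyG sees; only the stored direction gets src_blocks_dst = 1."""
    stored = set(stored_pairs)
    edges = []
    for src, dst in permutations(RINGS, 2):
        edges.append((src, dst, 1 if (src, dst) in stored else 0))
    return edges
```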

Per-frame is_locked semantics

is_locked = 1 on edge (A, B) iff A is currently the immediately-stacked ring on top of B on the same peg (adjacent in the peg-stack with A above B). Every other pair — non-adjacent on the same peg, on different pegs, or with either ring in transit — gets is_locked = 0. This is strictly "physical stacking right now," not "A must move before B."

Held-ring rule (captures "constraint broken during transit")

When the robot holds a ring (gripper closed between grasp and release of that move), the ring is in transit and no longer touches any other ring. The auto-labeler flags held = 1 for that ring on every held frame, and every edge touching it gets is_locked = 0 — the constraint is physically broken mid-move. On release, the new adjacency emerges and that edge flips back to is_locked = 1.
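
The adjacency rule and the held-ring rule combine into one small predicate. A sketch under stated assumptions — is_locked is a hypothetical stand-alone helper, and peg_stacks/held are illustrative in-memory shapes, not the on-disk format (per the symmetry invariant, both directions of a stacked pair report locked):

```python
def is_locked(edge, peg_stacks, held):
    """edge = (a, b); peg_stacks = {peg: [ring ids, top -> bottom]};
    held = set of rings currently in transit.
    Locked iff a and b are immediately stacked on the same peg and
    neither ring is held."""
    a, b = edge
    if a in held or b in held:
        return 0                          # held-ring rule: transit breaks contact
    for rings in peg_stacks.values():
        for i in range(len(rings) - 1):
            if {rings[i], rings[i + 1]} == {a, b}:
                return 1                  # immediately stacked pair
    return 0
```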

Implementation: auto_label.py reads robot_states.npy[:, 12] (gripper position, Robotiq 2F-85, 0-255) and detects grasp intervals via baseline-mode thresholding (estimate "resting open" mode, threshold at baseline + margin, binary-close morphologically to bridge single-frame glitches). It then zips the resulting intervals with solver_moves in order — the k-th grasp interval is assigned to the k-th move. Validated on ep_00 (1 move, 1 interval), ep_01 (15 moves, 15 intervals), ep_02 (1 move, 1 interval). Per-frame held deltas are recorded as frame_states[f].held = {ring_id: True|False}.
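
A sketch of that baseline-mode thresholding in pure NumPy (the function name and the convolution-based morphology are ours; auto_label.py's actual implementation may differ in detail):

```python
import numpy as np

def grasp_intervals(gripper, margin=10, close_k=3):
    """Detect grasp intervals from a Robotiq gripper trace (0-255).
    baseline = most common 'resting open' value; closed = above
    baseline + margin; binary-close to bridge single-frame glitches.
    Returns [(start, end)] with end exclusive."""
    vals, counts = np.unique(gripper, return_counts=True)
    baseline = vals[np.argmax(counts)]            # resting-open mode
    closed = gripper > baseline + margin
    # Morphological closing = dilation then erosion with a width-close_k window.
    dil = np.convolve(closed.astype(int), np.ones(close_k, int), mode="same") > 0
    clo = np.convolve(dil.astype(int), np.ones(close_k, int), mode="same") == close_k
    # Extract contiguous True runs as (start, end) index pairs.
    padded = np.concatenate([[False], clo, [False]])
    edges = np.flatnonzero(padded[1:] != padded[:-1])
    return [(int(s), int(e)) for s, e in zip(edges[::2], edges[1::2])]
```

Zipping the returned intervals with solver_moves in order then assigns the k-th grasp to the k-th move, as described above.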

Rule 2 — "larger must never sit on smaller"

Encoded without a new feature via the edge's existing src_blocks_dst bit:

| Edge direction | src_blocks_dst | Meaning |
|---|---|---|
| smaller → larger (e.g. ring_1 -> ring_3) | 1 | Legal — smaller may rest on larger |
| larger → smaller (e.g. ring_3 -> ring_1) | 0 | Illegal — larger may not rest on smaller |

Three dimension-preserving ways the world model can respect Rule 2:

| Method | Where | One-liner | Guarantee |
|---|---|---|---|
| Training loss | objective | λ * (pred_is_locked * (1 - src_blocks_dst)).sum() | Soft (shapes distribution) |
| Rollout mask | inference | Reject any predicted is_locked = 1 where src_blocks_dst = 0 | Hard (eliminates illegal) |
| Dataset invariant | this spec | is_locked is never 1 on a larger→smaller edge in any training frame | Hard (on training distribution) |

Node feature layout (270-D)

[0   : 256]   SAM2 embedding (256)
[256 : 259]   3D position (3)
[259 : 269]   type encoding (10)         — fixed 10-D vector from
                                            config/type_encoding_<method>.yaml
                                            (shared with Desktop)
[269]         visibility (1)

Total: 270-D — identical to Desktop. The 10-D encoding is domain-independent; unknown/unlisted types encode to a zero vector.

Mission metadata saved per episode

Every Hanoi side_graph.json carries goal_prompt, mission_kind, and target_state in addition to the fields shared with Desktop. Per-frame transitions (grasps, releases, re-stacks) are recorded as deltas in frame_states[f] with constraints, visibility, and held sub-dicts.

Hanoi episodes available

| Session | Episodes | Frames | Storage | Notes |
|---|---|---|---|---|
| hanoi/session_hanoi_0415_190808 | 3 | 7,479 | expanded | Initial Hanoi pilot: 1 × classical 15-move solve + 2 × single-ring moves (manual + teleop) |
| hanoi/session_hanoi_0417_133613 | 7 | 10,968 | expanded | Autonomous orchestrator, initial 4-stack on peg B, 40/40/20 mission mix, 1-10 moves/episode |
| hanoi/session_hanoi_0417_144403 | 20 | 30,942 | expanded | Autonomous orchestrator, initial 4-stack on peg A, 40/40/20 mission mix, 1-10 moves/episode |
| hanoi/session_hanoi_0417_164816 | 20 | 64,185 | 18 expanded + 2 zips | Autonomous orchestrator, initial 4-stack on peg C, min 3 moves/episode (no upper cap); episode_18.zip + episode_19.zip zipped |
| hanoi/session_hanoi_0420_132840 | 20 | 85,790 | 20 zips | Autonomous orchestrator, initial 2+0+2 (ring_1/ring_3 on peg A, ring_2/ring_4 on peg C), min 3 moves/episode (no cap); every episode stored as episode_XX.zip |
| hanoi/session_hanoi_0423_165447 | 20 | 84,837 | 20 zips | Autonomous orchestrator, initial 2+1+1 (ring_1+ring_4 on peg A, ring_3 on peg B, ring_2 on peg C), 50/25/25 mission mix (10 classical / 5 single_ring / 5 rearrange), min 5 / max 15 moves/episode — first session with both move bounds; every episode stored as episode_XX.zip |

Total across all Hanoi sessions: 90 episodes, 284,201 frames. Each episode_XX/metadata.json records the exact mission_kind, goal_prompt, initial_state, target_state, and solver_moves for that episode. All autonomous sessions are produced by scripts/hanoi/orchestrator.py, which pre-plans all N missions upfront from the captured initial state, resamples any mission exceeding the per-episode move cap, and records a deterministic solver reference trajectory for each accepted mission.

Zipped episodes. Three sessions have episodes stored as uncompressed (zip -0) archives rather than expanded directory trees — session_hanoi_0417_164816 has only its last two episodes zipped, while every episode of session_hanoi_0420_132840 and session_hanoi_0423_165447 is a zip. This is because HuggingFace datasets have a hard cap of 1 million files per repository, and expanding every annotated frame (~14 files per frame × ~280 K frames across all sessions) would have exceeded it. Extract before use:

# session_hanoi_0417_164816 — only 2 zipped episodes
cd hanoi/session_hanoi_0417_164816
unzip episode_18.zip       # → episode_18/
unzip episode_19.zip       # → episode_19/

# all-zip sessions — every episode is a zip
for sess in session_hanoi_0420_132840 session_hanoi_0423_165447; do
    cd "hanoi/$sess"
    for z in episode_*.zip; do unzip "$z"; done
    cd -
done

Once unzipped, the on-disk layout is identical to every other episode_XX/ directory in this dataset (same metadata.json, robot_states.npy, side/, wrist/, annotations/ tree, loadable by the exact same PyG loaders below). Expanded sessions require no pre-processing.

Graph generation for Hanoi (reference)

The full pipeline that produced every annotations/ tree above is checked in under tools/hanoi_pipeline/ in this repo. For the pipeline overview, algorithm details, and SAM2 checkpoint stats see the Pipeline and SAM2 models sections above. For the single-frame runtime inferer (use it inside a world-model prediction loop to turn a predicted RGB back into a graph), see tools/hanoi_pipeline/infer_graph_from_frame.py and tools/hanoi_pipeline/README.md.

Per-frame graph retrieval — how it works (important)

Every frame in every episode has its own distinct graph. The dataset stores them as a (structural skeleton + per-frame deltas) decomposition rather than N JSON files per episode, because the skeleton is the same every frame and the deltas are small. This reduces per-episode disk usage by roughly 6000× while losing zero information — the loader reconstructs each frame's full graph on demand.

Where each piece of a per-frame graph lives:

| Component of the frame-T graph | File |
|---|---|
| Node list (which rings exist) + structural edges (smaller→larger pairs) | annotations/side_graph.json → components, edges (shared across all frames) |
| is_locked / visibility / held as of frame T | annotations/side_graph.json → frame_states (delta-encoded up to T) |
| SAM2 mask of each ring at frame T | annotations/side_masks/frame_TTTTTT.npz |
| 256-D SAM2 embedding at frame T | annotations/side_embeddings/frame_TTTTTT.npz |
| 3D position (centroid) + bbox + depth-valid flag at frame T | annotations/side_depth_info/frame_TTTTTT.npz |
| Robot state at frame T | robot_states.npy[T] (13-D) |

The PyG loader combines these into a torch_geometric.data.Data object for exactly that frame — node features differ per frame (new embeddings + new 3D positions + new visibility flags), and edge features differ per frame (is_locked bits flip as rings are stacked / unstacked / held mid-transit).

To get a distinct graph for every labeled frame in an episode: use the list_all_frame_graphs helper below, or run scripts/materialize_per_frame_graphs.py to materialize them as individual .pt (and optional .json) files on disk.

Where edge-feature transitions live

The 3-D edge_attr vector is [has_constraint, is_locked, src_blocks_dst]. Of these, only is_locked changes over time — it flips when a ring lifts off / lands on another ring (or enters/exits the held state mid-transit). has_constraint and src_blocks_dst are static per edge.

Every transition of is_locked (and every transition of held) is recorded as a delta in side_graph.json under frame_states. The key is the frame index at which the transition happens; the value lists exactly which entries changed. Example from a real Hanoi single-move episode:

"frame_states": {
  "0":   {"constraints": {"ring_1->ring_2": true,  "ring_2->ring_3": true,
                          "ring_3->ring_4": true}},                // initial stack
  "134": {"constraints": {"ring_1->ring_2": false},                // ring_1 lifted OFF ring_2
          "held":        {"ring_1": true}},                        // ring_1 now in transit
  "278": {"constraints": {"ring_1->ring_3": true},                 // ring_1 placed on ring_3
          "held":        {"ring_1": false}}
}

The loader's resolve_frame_state(graph_json, T) walks frame_states in ascending key order up to T, applies every listed constraint/held delta, and returns the resolved state at frame T. That resolved state then populates edge_attr[:, 1] (the is_locked column) and the held flags that zero out edges touching rings in transit. So for frame 200 in the example above, ring_1->ring_2 is unlocked and every other edge touching ring_1 is also unlocked (held-ring rule), whereas ring_3->ring_4 is still locked (never changed).

Bottom line: there's no separate edge-feature file per frame — the transitions are packed into one delta dict in side_graph.json, and the loader replays them to give you the exact edge_attr for whichever frame you ask for.
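
The replay can be reproduced standalone on the example above. The resolve helper here mirrors what the loader's resolve_frame_state does for the constraint/held deltas; it is a sketch, not the shipped function:

```python
def resolve(frame_states, t):
    """Replay constraint/held deltas in ascending key order up to frame t."""
    constraints, held = {}, {}
    for f in sorted(int(k) for k in frame_states):
        if f > t:
            break
        delta = frame_states[str(f)]
        constraints.update(delta.get("constraints", {}))
        held.update(delta.get("held", {}))
    return constraints, held

# The single-move example from above, as Python literals.
frame_states = {
    "0":   {"constraints": {"ring_1->ring_2": True, "ring_2->ring_3": True,
                            "ring_3->ring_4": True}},
    "134": {"constraints": {"ring_1->ring_2": False}, "held": {"ring_1": True}},
    "278": {"constraints": {"ring_1->ring_3": True}, "held": {"ring_1": False}},
}
c, h = resolve(frame_states, 200)
# At frame 200: ring_1 is in transit, ring_1->ring_2 is unlocked,
# and ring_3->ring_4 is still locked (never changed).
```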

Shared: PyG edge feature semantics (3-D, both domains)

edge_attr[k] = [has_constraint, is_locked, src_blocks_dst]

| has_constraint | is_locked | src_blocks_dst | Meaning |
|---|---|---|---|
| 0 | 0 | 0 | No physical constraint — message passing only. Used for: robot ↔ anything; Hanoi larger → smaller (non-edge at the pair level) |
| 1 | 1 | 1 | Constraint active, src is the blocker (physical Desktop) / src rests on top (physical Hanoi) |
| 1 | 1 | 0 | Same pair, reverse direction — src is the blocked / src is underneath |
| 1 | 0 | 1 | Constraint released, src was the blocker / legal rest direction with no contact right now |
| 1 | 0 | 0 | Same released pair, reverse direction |

Symmetry invariants: has_constraint and is_locked are symmetric per unordered pair (same value for (i, j) and (j, i)). src_blocks_dst flips between the two directions. Robot ↔ anything edges are always [0, 0, 0].
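
These invariants are simple to assert over a loaded graph. A hedged sketch — check_edge_invariants is our name, and edge_index/edge_attr are the standard PyG tensors taken here as NumPy arrays:

```python
import numpy as np

def check_edge_invariants(edge_index, edge_attr):
    """edge_index: [2, E] src/dst rows; edge_attr: [E, 3] rows aligned
    with edge_index columns. Asserts has_constraint and is_locked are
    symmetric per pair and src_blocks_dst flips on constrained pairs."""
    attr = {(int(s), int(d)): edge_attr[k]
            for k, (s, d) in enumerate(edge_index.T)}
    for (s, d), a in attr.items():
        b = attr[(d, s)]                      # reverse edge must exist
        assert a[0] == b[0] and a[1] == b[1]  # symmetric per unordered pair
        if a[0] == 1:
            assert a[2] != b[2]               # src_blocks_dst flips
```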

Shared: Fixed 10-D type encoding — how it's made

Across both domains the component-type universe is 13 types (the two vocabularies unioned):

cpu_fan, cpu_bracket, cpu, ram_clip, ram, connector, graphic_card, motherboard,
ring_1, ring_2, ring_3, ring_4, robot

Each type is assigned a fixed 10-D vector. The encoding is NOT trained — it is a deterministic lookup read from a YAML at load time, so any consumer of the dataset gets the exact same node features bit-for-bit. Two methods are provided; both YAMLs live at the dataset repo root alongside the session directories:

| Method | YAML file | How vectors are built | Semantic structure |
|---|---|---|---|
| random | config/type_encoding_random.yaml | numpy.random.default_rng(42) unit-norm 10-vectors, one per type | None — vectors are orthogonal-ish noise |
| clip | config/type_encoding_clip.yaml | CLIP ViT-B/32 text embedding of a humanised prompt (e.g. "a CPU fan", "a small red ring") → PCA to 10 → unit-normalise | Related types cluster (the four rings are close; the fan/bracket/cpu cluster is tight) |

Unknown type → 10-D zero vector. If a component's type is not in the YAML, the loader returns np.zeros(10, dtype=np.float32) for that slot. This keeps node dim at 270 regardless of vocabulary drift.

To reproduce or extend: download whichever YAML you want from the dataset repo root, load it with yaml.safe_load, and look up each component's type. The loader code below shows the full pattern.

Shared: PyG loader — self-contained Python

Prerequisites

pip install torch numpy torch_geometric pillow pyyaml

Save as gnn_world_model_loader.py

The key design property: node_dim = 256 + 3 + 10 + 1 = 270 for both domains. The 10-D type slot comes from the fixed YAML encoding (loaded once), so there's no domain branching — Desktop, Hanoi, and any future vocabulary all produce 270-D nodes.

import json
from dataclasses import dataclass
from functools import lru_cache
from pathlib import Path
from typing import Dict, List, Optional
import numpy as np
import torch
import yaml
from torch_geometric.data import Data

# ---------- constants ----------
TYPE_ENCODING_DIM = 10          # fixed, domain-independent
SAM2_EMB_DIM = 256
POS_DIM = 3
VIS_DIM = 1
NODE_DIM = SAM2_EMB_DIM + POS_DIM + TYPE_ENCODING_DIM + VIS_DIM   # = 270
ROBOT_STATE_DIM = 13            # [j0..j5, tcp_x, tcp_y, tcp_z, tcp_rx, tcp_ry, tcp_rz, gripper_pos]


# ---------- fixed type encoding ----------
# Download once from the dataset repo root:
#   config/type_encoding_random.yaml   (seeded numpy unit vectors, seed=42)
#   config/type_encoding_clip.yaml     (CLIP ViT-B/32 text → PCA(10) → unit-norm)
# Point TYPE_ENCODING_ROOT at wherever you saved them.
TYPE_ENCODING_ROOT = Path("./config")


@lru_cache(maxsize=4)
def load_type_encoding(encoding_method: str = "random") -> Dict[str, np.ndarray]:
    """Load the fixed 10-D per-type encoding from YAML. Cached across calls."""
    path = TYPE_ENCODING_ROOT / f"type_encoding_{encoding_method}.yaml"
    with open(path) as f:
        raw = yaml.safe_load(f)
    return {k: np.asarray(v, dtype=np.float32) for k, v in raw.items()}


def type_encode(comp_type: str, encoding_method: str = "random") -> np.ndarray:
    """Return 10-D vector for `comp_type`; zeros for unknown types."""
    table = load_type_encoding(encoding_method)
    vec = table.get(comp_type)
    if vec is None:
        return np.zeros(TYPE_ENCODING_DIM, dtype=np.float32)
    return vec.astype(np.float32)


# ---------- file helpers ----------
def list_labeled_frames(episode_dir: Path) -> List[int]:
    mask_dir = episode_dir / "annotations" / "side_masks"
    if not mask_dir.exists():
        return []
    frames = []
    for p in mask_dir.glob("frame_*.npz"):
        try:
            frames.append(int(p.stem.split("_")[1]))
        except (ValueError, IndexError):
            continue
    return sorted(frames)


def resolve_frame_state(graph_json: dict, frame_idx: int):
    constraints, visibility = {}, {}
    for c in graph_json["components"]:
        visibility[c["id"]] = True
    for e in graph_json["edges"]:
        constraints[f"{e['src']}->{e['dst']}"] = True
    fs_dict = graph_json.get("frame_states", {})
    for f in sorted([int(k) for k in fs_dict]):
        if f > frame_idx:
            break
        fs = fs_dict[str(f)]
        for k, v in fs.get("constraints", {}).items():
            constraints[k] = v
        for k, v in fs.get("visibility", {}).items():
            visibility[k] = v
    return constraints, visibility


@dataclass
class FrameData:
    graph: dict
    masks: dict
    embeddings: dict
    depth_info: dict
    robot: Optional[dict]
    constraints: dict
    visibility: dict


def load_frame_data(episode_dir, frame_idx):
    anno = Path(episode_dir) / "annotations"
    with open(anno / "side_graph.json") as f:
        graph = json.load(f)
    def _npz(p):
        if not p.exists(): return {}
        d = np.load(p)
        return {k: d[k] for k in d.files}
    masks = _npz(anno / "side_masks" / f"frame_{frame_idx:06d}.npz")
    embeddings = _npz(anno / "side_embeddings" / f"frame_{frame_idx:06d}.npz")
    depth_info = _npz(anno / "side_depth_info" / f"frame_{frame_idx:06d}.npz")
    robot = None
    rp = anno / "side_robot" / f"frame_{frame_idx:06d}.npz"
    if rp.exists():
        r = np.load(rp)
        if r["visible"][0] == 1:
            robot = {k: r[k] for k in r.files}
    constraints, visibility = resolve_frame_state(graph, frame_idx)
    return FrameData(graph, masks, embeddings, depth_info, robot, constraints, visibility)


def _build_product_node_features(nodes, fd, encoding_method):
    feats = []
    for node in nodes:
        cid = node["id"]
        emb = fd.embeddings.get(cid, np.zeros(SAM2_EMB_DIM, dtype=np.float32))
        dvk = f"{cid}_depth_valid"; ck = f"{cid}_centroid"
        if dvk in fd.depth_info and int(fd.depth_info[dvk][0]) == 1:
            pos = fd.depth_info[ck].astype(np.float32)
        else:
            pos = np.zeros(POS_DIM, dtype=np.float32)
        vis = 1.0 if fd.visibility.get(cid, True) else 0.0
        if vis == 0.0:
            emb = np.zeros(SAM2_EMB_DIM, dtype=np.float32)
            pos = np.zeros(POS_DIM, dtype=np.float32)
        feats.append(np.concatenate([
            emb.astype(np.float32),
            pos,
            type_encode(node["type"], encoding_method),
            np.array([vis], dtype=np.float32),
        ]))
    if not feats:
        return torch.empty((0, NODE_DIM), dtype=torch.float32)
    return torch.tensor(np.stack(feats), dtype=torch.float32)


def _build_product_edges(nodes, graph, fd):
    N = len(nodes)
    constraint_set = {(e["src"], e["dst"]) for e in graph["edges"]}
    pair_forward = {frozenset([s, d]): (s, d) for s, d in constraint_set}
    src_idx, dst_idx, edge_attr = [], [], []
    for i in range(N):
        for j in range(N):
            if i == j: continue
            src_id, dst_id = nodes[i]["id"], nodes[j]["id"]
            src_idx.append(i); dst_idx.append(j)
            key = frozenset([src_id, dst_id])
            if key in pair_forward:
                fwd = pair_forward[key]
                is_locked = fd.constraints.get(f"{fwd[0]}->{fwd[1]}", True)
                sb = 1.0 if src_id == fwd[0] else 0.0
                edge_attr.append([1.0, 1.0 if is_locked else 0.0, sb])
            else:
                edge_attr.append([0.0, 0.0, 0.0])
    return src_idx, dst_idx, edge_attr


# ---------- 1) products-only (Option 1: direct graph encoding) ----------
def load_pyg_frame_products_only(episode_dir, frame_idx, encoding_method: str = "random"):
    fd = load_frame_data(episode_dir, frame_idx)
    nodes = fd.graph["components"]
    x = _build_product_node_features(nodes, fd, encoding_method)
    src, dst, ea = _build_product_edges(nodes, fd.graph, fd)
    return Data(
        x=x,
        edge_index=torch.tensor([src, dst], dtype=torch.long),
        edge_attr=torch.tensor(ea, dtype=torch.float32),
        y=torch.tensor([frame_idx], dtype=torch.long),
        num_nodes=len(nodes),
    )


# ---------- 2) V2 ablation: robot as graph NODE (Desktop only) ----------
def load_pyg_frame_with_robot(episode_dir, frame_idx, encoding_method: str = "random"):
    fd = load_frame_data(episode_dir, frame_idx)
    # Hanoi has no robot mask/embedding in v1 → fall back to products-only.
    if fd.robot is None:
        return load_pyg_frame_products_only(episode_dir, frame_idx, encoding_method)

    products = fd.graph["components"]
    N_prod = len(products); N = N_prod + 1

    x_prod = _build_product_node_features(products, fd, encoding_method)
    robot_emb = fd.robot["embedding"].astype(np.float32)
    robot_pos = (fd.robot["centroid"].astype(np.float32)
                 if int(fd.robot["depth_valid"][0]) == 1
                 else np.zeros(POS_DIM, dtype=np.float32))
    robot_feat = np.concatenate([
        robot_emb, robot_pos,
        type_encode("robot", encoding_method),
        np.array([1.0], dtype=np.float32),
    ])
    x = torch.cat([x_prod, torch.tensor(robot_feat, dtype=torch.float32).unsqueeze(0)], dim=0)

    src, dst, ea = _build_product_edges(products, fd.graph, fd)
    robot_idx = N_prod
    for i in range(N_prod):
        src.append(robot_idx); dst.append(i); ea.append([0.0, 0.0, 0.0])
        src.append(i); dst.append(robot_idx); ea.append([0.0, 0.0, 0.0])

    data = Data(
        x=x,
        edge_index=torch.tensor([src, dst], dtype=torch.long),
        edge_attr=torch.tensor(ea, dtype=torch.float32),
        y=torch.tensor([frame_idx], dtype=torch.long),
        num_nodes=N,
    )
    data.robot_point_cloud = torch.tensor(fd.robot["point_cloud"], dtype=torch.float32)
    data.robot_pixel_coords = torch.tensor(fd.robot["pixel_coords"], dtype=torch.int32)
    data.robot_mask = torch.tensor(fd.robot["mask"], dtype=torch.uint8)
    return data


# ---------- 3) V3 recommended: products graph + robot_state side-tensor ----------
def load_pyg_frame_with_robot_state(episode_dir, frame_idx, encoding_method: str = "random"):
    data = load_pyg_frame_products_only(episode_dir, frame_idx, encoding_method)
    robot_states = np.load(Path(episode_dir) / "robot_states.npy")   # (T, 13) float32
    rs = robot_states[frame_idx].astype(np.float32)                  # 13-D
    data.robot_state = torch.tensor(rs, dtype=torch.float32)
    return data


# ---------- 4) V3 action-conditioned: + robot_action delta ----------
def load_pyg_frame_with_robot_action(episode_dir, frame_idx, encoding_method: str = "random"):
    data = load_pyg_frame_with_robot_state(episode_dir, frame_idx, encoding_method)
    robot_states = np.load(Path(episode_dir) / "robot_states.npy")   # (T, 13)
    T = robot_states.shape[0]
    if frame_idx + 1 < T:
        action = robot_states[frame_idx + 1] - robot_states[frame_idx]
    else:
        action = np.zeros(ROBOT_STATE_DIM, dtype=np.float32)
    data.robot_action = torch.tensor(action.astype(np.float32), dtype=torch.float32)
    return data


# ---------- 5) Generator: one distinct graph per labeled frame ----------
_VARIANTS = {
    "products_only":     load_pyg_frame_products_only,
    "with_robot":        load_pyg_frame_with_robot,
    "with_robot_state":  load_pyg_frame_with_robot_state,
    "with_robot_action": load_pyg_frame_with_robot_action,
}


def list_all_frame_graphs(
    episode_dir,
    variant: str = "with_robot_state",
    encoding_method: str = "random",
):
    """Yield (frame_idx, Data) for every labeled frame in an episode.

    Each `Data` object is the full per-frame graph (node features, edges,
    edge features, and any requested side tensors). Feature values and
    `is_locked` bits differ per frame as rings move / stack / get held.
    """
    if variant not in _VARIANTS:
        raise ValueError(f"variant must be one of {list(_VARIANTS)}, got {variant!r}")
    loader = _VARIANTS[variant]
    for f in list_labeled_frames(Path(episode_dir)):
        yield f, loader(episode_dir, f, encoding_method=encoding_method)

Usage examples

All four loaders share the signature (episode_dir, frame_idx, encoding_method="random"). Swap "random" for "clip" to use the CLIP-derived encoding instead.

Desktop V1 — 15 product nodes, 270-D features, fully-connected edges (15×14 = 210):

from pathlib import Path
from gnn_world_model_loader import load_pyg_frame_products_only

episode = Path("session_0408_162129/episode_00")
data = load_pyg_frame_products_only(episode, frame_idx=42)
print(data)
# → Data(x=[15, 270], edge_index=[2, 210], edge_attr=[210, 3])

Desktop V3 (recommended) — same graph + 13-D robot_state side-tensor:

from gnn_world_model_loader import load_pyg_frame_with_robot_state

data = load_pyg_frame_with_robot_state(episode, frame_idx=42)
print(data)
# → Data(x=[15, 270], edge_index=[2, 210], edge_attr=[210, 3], robot_state=[13])

Desktop V3 action-conditioned — adds 13-D delta for the next frame:

from gnn_world_model_loader import load_pyg_frame_with_robot_action

data = load_pyg_frame_with_robot_action(episode, frame_idx=42)
# → Data(x=[15, 270], edge_index=[2, 210], edge_attr=[210, 3],
#         robot_state=[13], robot_action=[13])

Hanoi V1 — 4 ring nodes, 270-D features, 12 fully-connected edges:

episode = Path("hanoi/session_hanoi_0415_190808/episode_00")
data = load_pyg_frame_products_only(episode, frame_idx=250)
print(data)
# → Data(x=[4, 270], edge_index=[2, 12], edge_attr=[12, 3])

Hanoi V3 (recommended) — V3 works for Hanoi too because robot_states.npy is recorded for every episode:

data = load_pyg_frame_with_robot_state(episode, frame_idx=250)
print(data)
# → Data(x=[4, 270], edge_index=[2, 12], edge_attr=[12, 3], robot_state=[13])

V2 note. load_pyg_frame_with_robot falls back to load_pyg_frame_products_only on Hanoi (no robot mask), so for Hanoi, V1 and V2 return identical graphs. On Desktop, V2 attaches the robot as a 16th node (x shape becomes [16, 270]).

How to use this dataset — full instructions

All common tasks ship as runnable Python scripts in this repo. No copy-pasting from the README — download, run, get results.

Everything included in this repo

| File / Directory | What it does |
|---|---|
| `gnn_world_model_loader.py` (root) | PyG loader — reads annotations + robot_states, produces one `Data` per frame |
| `config/type_encoding_{random,clip}.yaml` | Fixed 10-D per-type encoding YAMLs (loader reads these) |
| `examples/01_inspect_episode.py` | Step 1 — inspect per-frame graphs in one episode |
| `examples/02_train_gnn.py` | Step 2 — train an is_locked GAT + save checkpoint |
| `examples/03_infer_from_rgb.py` | Step 3 — RGB+depth → graph (SAM2) |
| `examples/04_materialize_per_frame.py` | Step 4 — dump .pt/.json per frame |
| `examples/05_build_canonical_position_lut.py` | Step 5 — aggregate (ring, peg, stack) → 3D centroid LUT |
| `examples/06_infer_from_predicted_rgb.py` | Step 6 — RGB-only (no depth) → graph via LUT (for world-model-predicted frames) |
| `examples/07_gnn_to_worldmodel_latent.py` | Step 7 — load frozen pretrained GNN → per-frame WM latent |
| `examples/08_joint_train_gnn_cosmos.py` | Step 8 — joint train GNN + Cosmos Predict 2.5 |
| `examples/09_verify_goal_prompts.py` | Step 9 — inspect / sanity-check every episode's goal_prompt vs target_state |
| `examples/10_upgrade_prompts.py` | Step 10 — one-off, run right after download to normalise every goal_prompt into the canonical grounded form |
| `tools/hanoi_pipeline/` | Full Hanoi auto-labeling pipeline (SAM2-FT checkpoint, configs, auto_label.py, infer_graph_from_frame.py, src/ modules) |

Step numbers match the numeric prefix on every script so you can cross-reference at a glance. Every script is self-pathing (resolves the loader / configs / pipeline tools relative to its own location), so cd into the repo root or run the scripts with absolute paths — either works.

Step 0 — Download the dataset

pip install huggingface_hub torch torch_geometric numpy pyyaml opencv-python pillow

# Full pull (~150 GB)
hf download ChangChrisLiu/GNN_Disassembly_WorldModel --repo-type dataset --local-dir ./gnn_world_model

# Or slim pull — just one Hanoi episode plus the code + configs:
hf download ChangChrisLiu/GNN_Disassembly_WorldModel --repo-type dataset \
    --include "gnn_world_model_loader.py" "examples/*" "config/*" \
               "tools/hanoi_pipeline/**" \
               "hanoi/session_hanoi_0415_190808/episode_00/**" \
    --local-dir ./gnn_world_model

cd gnn_world_model

The two large episodes in session_hanoi_0417_164816 are stored as uncompressed zip archives (HF has a 1 M file-per-repo cap). Unzip once:

cd hanoi/session_hanoi_0417_164816
unzip episode_18.zip    # → episode_18/
unzip episode_19.zip    # → episode_19/
cd ../..

After this, every example script resolves imports and data paths automatically.

SAM2 base checkpoint (needed only for Steps 3, 6, and the Bonus auto-label)

Steps 3 and 6 — and the "Bonus: reproducing the dataset" section — load Meta AI's SAM2 base model to perform segmentation on new RGB inputs. We ship the Hanoi fine-tuned decoder (tools/hanoi_pipeline/checkpoints/sam2_hanoi_ft.pt), but not the 320 MB SAM2 base — download it from Meta's repo:

# 1. Clone SAM2 (installs the Python package + the configs/)
git clone https://github.com/facebookresearch/sam2
pip install -e ./sam2

# 2. Download the base checkpoint (or see sam2/checkpoints/download_ckpts.sh)
# 3. Place / symlink it where tools/hanoi_pipeline/ can find it:
mkdir -p tools/hanoi_pipeline/sam2/checkpoints
cp /path/to/sam2.1_hiera_base_plus.pt tools/hanoi_pipeline/sam2/checkpoints/
# OR symlink your existing SAM2 install root:
ln -sf /absolute/path/to/your/sam2 tools/hanoi_pipeline/sam2

Once the checkpoint is reachable at tools/hanoi_pipeline/sam2/checkpoints/sam2.1_hiera_base_plus.pt, Steps 3 and 6 work directly. Steps 1, 2, 4, 5, and 7–10 don't need SAM2 — they only use the already-labeled annotations shipped with the dataset.

Step 1 — Load one episode and inspect its per-frame graphs

Verified script (see examples/01_inspect_episode.py):

python examples/01_inspect_episode.py \
    --episode hanoi/session_hanoi_0415_190808/episode_00

It enumerates every labeled frame of the episode, builds a PyG Data for one of them (node features of shape [4, 270], edge features of shape [12, 3]), prints the current is_locked and src_blocks_dst bits, and shows the locked-edge count varying per frame.

Every labeled frame produces a distinct graph. The static skeleton (which rings exist, which structural pairs are possible) lives in side_graph.json; the time-varying bits (is_locked, held, node features) come from per-frame npz files. The loader reassembles them on demand, which is why a single .json per episode is enough.
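The fold-forward resolution is small enough to sketch in full. The toy graph below is illustrative, but the loop mirrors the loader's resolve_frame_state: start from the static skeleton's defaults, then apply every frame_states delta up to the query frame.

```python
# Minimal sketch of frame-state delta resolution (mirrors the loader's
# resolve_frame_state). Toy graph dict, same schema as side_graph.json.
toy_graph = {
    "components": [{"id": "ring_1"}, {"id": "ring_3"}],
    "edges": [{"src": "ring_1", "dst": "ring_3"}],
    "frame_states": {
        "0":   {"constraints": {"ring_1->ring_3": True}, "visibility": {}},
        "120": {"constraints": {"ring_1->ring_3": False},
                "visibility": {"ring_1": False}},
    },
}

def resolve(graph, frame_idx):
    # Defaults from the static skeleton: all edges locked, all nodes visible.
    constraints = {f"{e['src']}->{e['dst']}": True for e in graph["edges"]}
    visibility = {c["id"]: True for c in graph["components"]}
    # Fold in deltas in temporal order, stopping past the query frame.
    for f in sorted(int(k) for k in graph["frame_states"]):
        if f > frame_idx:
            break
        delta = graph["frame_states"][str(f)]
        constraints.update(delta.get("constraints", {}))
        visibility.update(delta.get("visibility", {}))
    return constraints, visibility

c_early, v_early = resolve(toy_graph, 50)    # only the frame-0 delta applies
c_late,  v_late  = resolve(toy_graph, 200)   # frame-120 delta also applies
print(c_early["ring_1->ring_3"], c_late["ring_1->ring_3"])  # True False
```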

The --variant flag (forwarded to the loader) picks which Data schema you want:

| Variant | What you get |
|---|---|
| `products_only` | Bare graph — nodes + edges, no robot info |
| `with_robot` | Desktop V2 — robot as a graph node (falls back to products_only for Hanoi) |
| `with_robot_state` | Recommended — graph + robot_state=[13] side tensor (works for both domains) |
| `with_robot_action` | with_robot_state + robot_action=[13] delta for the next frame |

Step 2 — Train a GNN on Hanoi graphs

Verified script (see examples/02_train_gnn.py):

python examples/02_train_gnn.py                                   # all Hanoi episodes
python examples/02_train_gnn.py --sessions hanoi/session_hanoi_0415_190808
python examples/02_train_gnn.py --epochs 10 --batch-size 32 --lr 3e-4

The script:

  1. Walks every labeled frame in the selected sessions and materialises a flat list of PyG Data objects.
  2. Wraps them in a PyG DataLoader (auto-batching of nodes + edges + per-graph robot_state).
  3. Trains a 2-layer GATConv predictor for per-edge is_locked, conditioned on the broadcast robot_state.
  4. Adds a Rule-2 compliance term Σ σ(logits) * (1 - legal_mask) over edges, penalising any prediction that would lock a larger→smaller edge.

Key detail baked into the script (and the README's loader code for reference): after PyG batching, data.robot_state is a flat 1-D tensor of length num_graphs × 13 (PyG's default __cat_dim__ = 0 for 1-D attrs). The script reshapes via data.robot_state.view(-1, 13)[data.batch] before concatenating onto node features.
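That flattening is easy to demonstrate without installing PyG. The numpy sketch below reproduces the same reshape-and-gather on stand-in tensors (the real script applies it to data.robot_state and data.batch):

```python
import numpy as np

# Two graphs batched together: PyG concatenates each graph's 1-D
# robot_state along dim 0, giving one flat vector of length 2 * 13.
rs_g0 = np.arange(13, dtype=np.float32)          # graph 0's robot_state
rs_g1 = np.arange(13, dtype=np.float32) + 100.0  # graph 1's robot_state
robot_state_flat = np.concatenate([rs_g0, rs_g1])   # shape (26,)

# data.batch maps each node to its graph; say graph 0 has 15 nodes
# (Desktop-sized) and graph 1 has 4 nodes (Hanoi-sized).
batch = np.array([0] * 15 + [1] * 4)

# The fix from the script: un-flatten, then broadcast one state per node.
per_node = robot_state_flat.reshape(-1, 13)[batch]  # shape (19, 13)
print(per_node.shape)                    # (19, 13)
print(per_node[0, 0], per_node[15, 0])   # 0.0 100.0
```

After this, per_node can be concatenated onto the node-feature matrix along dim 1.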

Step 3 — Inference: turn a predicted RGB into a graph

Verified script (see examples/03_infer_from_rgb.py). Requires Meta's SAM2 + base checkpoint installed (see their repo):

python examples/03_infer_from_rgb.py \
    --rgb   hanoi/session_hanoi_0415_190808/episode_00/side/rgb/frame_000100.png \
    --depth hanoi/session_hanoi_0415_190808/episode_00/side/depth/frame_000100.npy \
    --out   /tmp/predicted_graph.json

Internally it loads tools/hanoi_pipeline/infer_graph_from_frame.py's HanoiGraphInferer, which auto-selects tools/hanoi_pipeline/checkpoints/sam2_hanoi_ft.pt (the Hanoi-FT) on top of the vanilla SAM2 base. Output is a 5-field dict identical to the offline pipeline:

| Key | Shape / type | Meaning |
|---|---|---|
| `graph` | dict (same schema as side_graph.json) | Nodes + structural edges, frame_states empty (single frame has no history) |
| `masks` | {ring_id: (H, W) uint8} | SAM2 mask per detected ring |
| `embeddings` | {ring_id: (256,) float32} | Mask-pooled SAM2 vision features |
| `depth_info` | flat dict {ring_id_centroid, ring_id_point_cloud, ...} | 3-D bundle (empty if --depth omitted) |
| `ring_states` | {ring_id: RingState(peg, stack_index)} | Inferred peg assignment |
From this, build a PyG Data by stitching SAM2 embedding + 3-D position + type_encode(c["type"]) + visibility bit per component — exactly what the training loaders do internally. Override the checkpoint path with SAM2_FINETUNE_CKPT=<path>, or set it to empty to force vanilla SAM2.
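As a sketch, that stitch is a plain concatenation. All values below are placeholders for what the inference dict and type_encode would supply; only the slot layout (256 + 3 + 10 + 1 = 270) is taken from the dataset spec:

```python
import numpy as np

SAM2_EMB_DIM, POS_DIM, TYPE_DIM = 256, 3, 10

# Placeholder stand-ins for one detected component:
emb = np.random.default_rng(0).standard_normal(SAM2_EMB_DIM).astype(np.float32)
pos = np.array([0.12, -0.04, 0.55], dtype=np.float32)    # 3-D centroid (m)
type_vec = np.zeros(TYPE_DIM, dtype=np.float32)          # fixed 10-D type slot
type_vec[1] = 1.0
vis = np.array([1.0], dtype=np.float32)                  # visibility bit

# emb | pos | type | vis -> one 270-D node feature vector
node_feat = np.concatenate([emb, pos, type_vec, vis])
print(node_feat.shape)   # (270,)
```

Stack one such vector per component to obtain the x matrix the loaders produce.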

Step 4 — Materialize per-frame graphs to disk (optional)

Target use: inspect a specific frame's graph outside PyTorch, or hand per-frame graph files to a non-PyTorch consumer.

python examples/04_materialize_per_frame.py \
    hanoi/session_hanoi_0415_190808/episode_00 \
    --out     ./per_frame_graphs \
    --variant with_robot_state \
    --also-json

Writes frame_XXXXXX.pt (and optionally diff-friendly frame_XXXXXX.json) per labeled frame. Reload:

import torch
data = torch.load("per_frame_graphs/frame_000100.pt", weights_only=False)
print(data)          # Data(x=[4, 270], edge_index=[2, 12], edge_attr=[12, 3], robot_state=[13])

For in-process iteration without writing files, use list_all_frame_graphs(...) from the loader module.

Step 5 — Build a canonical (ring, peg, stack_index) → 3-D centroid LUT

Target use: prerequisite for Step 6 (inference on world-model-predicted frames that have no depth). Since the physical rig is fixed, every ring's 3-D centroid at a given (peg, stack_index) is near-constant across all episodes — so you can aggregate them once into a lookup table.

python examples/05_build_canonical_position_lut.py \
    [--hanoi-root hanoi] [--out config/hanoi_canonical_positions.yaml]

Walks every labeled Hanoi frame, buckets (ring_id, peg, stack_index) → centroid, averages, and writes config/hanoi_canonical_positions.yaml. Each row records the mean centroid, per-axis std, and sample count so you can spot-check coverage. The more episodes you run this over, the better the coverage; the full 50-episode HF release exercises essentially all legal (ring, peg, stack) triplets.
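The aggregation itself is a plain bucket-and-average. Here is a sketch over toy centroid observations (the real input is each labeled frame's depth_info; the exact YAML field names the script writes may differ):

```python
import numpy as np
from collections import defaultdict

# Toy observations: (ring_id, peg, stack_index) plus an observed centroid.
observations = [
    ("ring_1", "A", 0, [0.10, 0.00, 0.50]),
    ("ring_1", "A", 0, [0.11, 0.01, 0.49]),
    ("ring_2", "B", 1, [0.30, 0.02, 0.48]),
]

# Bucket centroids by (ring, peg, stack_index) ...
buckets = defaultdict(list)
for ring, peg, stack, centroid in observations:
    buckets[(ring, peg, stack)].append(np.asarray(centroid, dtype=np.float32))

# ... then record mean, per-axis std, and sample count per bucket.
lut = {
    key: {"mean": np.mean(pts, axis=0), "std": np.std(pts, axis=0), "n": len(pts)}
    for key, pts in buckets.items()
}
print(lut[("ring_1", "A", 0)]["n"])   # 2
```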

Step 6 — RGB → graph on a world-model-predicted frame (no depth)

Target use: during rollout of a video world model (Cosmos Predict 2.5, VideoPoet, DiT, …), each predicted RGB needs to be turned back into a constraint graph — but predicted frames typically don't come with depth. SAM2 still detects rings and their peg / stack_index; you substitute the depth-based centroid with a lookup from Step 5's LUT.

python examples/06_infer_from_predicted_rgb.py \
    --rgb predicted_future_frame.png \
    [--lut config/hanoi_canonical_positions.yaml] \
    [--out graph.json]

Internally this (1) runs HanoiGraphInferer(rgb, depth=None) — returning masks, embeddings, structural edges, and ring_states — then (2) fills depth_info from the LUT by looking up each detected ring's (peg, stack_index). The output is the same 5-field dict as Step 3, so downstream code is unchanged.

LUT misses (e.g. a predicted state you never observed in training) are surfaced explicitly in the script's output; add more labeled sessions to improve coverage, or fall back to vanilla Step 3 when depth is available.
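The substitution step can be sketched as a dictionary lookup with explicit miss reporting; lut and ring_states below are toy stand-ins for Step 5's table and the inferer's ring_states output:

```python
# Toy canonical-position table and inferred ring states:
lut = {("ring_1", "A", 0): [0.10, 0.00, 0.50]}
ring_states = {"ring_1": ("A", 0), "ring_2": ("B", 1)}

depth_info, misses = {}, []
for ring_id, (peg, stack) in ring_states.items():
    centroid = lut.get((ring_id, peg, stack))
    if centroid is None:
        misses.append(ring_id)          # state never observed in training
    else:
        # Fill the same flat keys a depth-based bundle would contain.
        depth_info[f"{ring_id}_centroid"] = centroid
        depth_info[f"{ring_id}_depth_valid"] = [1]

print(sorted(depth_info), misses)
# ['ring_1_centroid', 'ring_1_depth_valid'] ['ring_2']
```

Surfacing misses instead of guessing keeps downstream consumers honest about coverage gaps.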

Two ways to wire the GNN into a world model (Steps 7 vs 8)

There are exactly two sensible architectures; Steps 7 and 8 below implement one each. Pick the one that fits your setup:

| | Step 7 — pretrained GNN → WM latent | Step 8 — joint GNN + WM training |
|---|---|---|
| GNN weights | Trained first via Step 2, then frozen | Trained jointly with WM backbone |
| Gradient flow | WM → fixed graph latent (stops at GNN) | WM ↔ GNN (bidirectional) |
| Use when | WM is a black box you can't backprop through (external API, 3rd-party pipeline), or you want to ablate "does graph conditioning help at all?" | You control the WM architecture and want joint optimisation — GNN learns conditioning that's directly useful to the WM's reconstruction loss |
| Stability | GNN stays at the edge-prediction quality Step 2 achieved | Joint training can degrade the GNN if the WM loss dominates |

Step 7 — Pretrained GNN → world-model conditioning latent

Target use: your GNN is already trained (via Step 2) and you want to USE its per-frame output as an extra conditioning stream for a WM you treat as a black box. GNN weights do not update here.

# 1. Train the GNN (Step 2 already does this; save a checkpoint)
python examples/02_train_gnn.py --epochs 10 --out checkpoints/gnn_is_locked.pt

# 2. Load the checkpoint (frozen), produce [T, 2H] latent per episode, and
#    demonstrate concatenation into the WM's text/context stream:
python examples/07_gnn_to_worldmodel_latent.py \
    --ckpt    checkpoints/gnn_is_locked.pt \
    --episode hanoi/session_hanoi_0415_190808/episode_00 \
    --out     /tmp/wm_conditioning.pt

Step 7 loads the checkpoint's state_dict into the IsLockedPredictor module, freezes it, runs each frame's graph through it to get per-node embeddings [N, H], mean/max pools to [2H], then demonstrates a torch.nn.Linear(2H → wm_text_dim) projection followed by concatenation with the WM's text embedding — exactly what you'd wire into Cosmos Predict 2.5's transformer.context_embedder call site.
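In numpy terms, the pool-and-project wiring looks like the sketch below; H and wm_text_dim are illustrative sizes, not values taken from the scripts, and the weight matrix stands in for the torch.nn.Linear:

```python
import numpy as np

rng = np.random.default_rng(42)
N, H, wm_text_dim = 4, 64, 1024      # 4 nodes, hidden width H (illustrative)

# Frozen-GNN per-node embeddings for one frame: [N, H]
node_emb = rng.standard_normal((N, H)).astype(np.float32)

# Mean/max pool over nodes -> one [2H] graph latent per frame.
pooled = np.concatenate([node_emb.mean(axis=0), node_emb.max(axis=0)])

# Linear(2H -> wm_text_dim) projection, ready to concatenate alongside
# the WM's text-embedding tokens.
W = rng.standard_normal((wm_text_dim, 2 * H)).astype(np.float32) * 0.01
b = np.zeros(wm_text_dim, dtype=np.float32)
cond_token = W @ pooled + b
print(pooled.shape, cond_token.shape)   # (128,) (1024,)
```

Stacking cond_token over frames yields the [T, 2H]-derived conditioning stream the script writes out.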

Step 8 — Joint training: GNN + Cosmos Predict 2.5 (end-to-end world model)

Target use: you control the WM architecture and want the GNN trained together with the video backbone — gradients flow through both, so the GNN learns conditioning that minimises the WM's reconstruction loss directly.

python examples/08_joint_train_gnn_cosmos.py --epochs 2 --batch-size 2

Architecture:

┌──────────────────────┐
│  Cosmos Predict 2.5  │   ← frozen / LoRA backbone, predicts next-frame RGB
└──────────┬───────────┘
           │ reconstruction loss (pixel / KL)
           ▼
     ┌─────────────────────┐
     │ fusion: cond-token  │ ← [T, 2H] tokens from GraphConditioningEncoder
     │ stream concatenated │   (defined in-file; trained jointly — unlike
     │ alongside Cosmos'   │    Step 7's frozen pretrained path)
     │ text/image stream   │
     └─────────┬───────────┘
               │
               ▼  constraint-aware prediction
         per-edge `is_locked` logits
               │
               ▼  supervision
     BCE  + Rule-2 soft compliance

Total loss: L_wm (reconstruction) + λ_edge · L_edge_BCE + λ_rule · L_rule2.

The script ships a CosmosStub that stands in for the real Cosmos_Predict2_Video2World_Pipeline so it runs on CPU without downloading the 2-B-parameter weights; the docstring shows the exact replacement block for a real run. Gradients flow through GNN → fusion token → Cosmos → pixel loss, so the GNN is literally co-trained with the world model, not bolted on afterward.
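The Rule-2 soft-compliance term in that total loss is compact enough to write out. A numpy sketch with toy logits, where legal_mask marks edges that are physically allowed to lock (smaller-on-larger):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy per-edge is_locked logits and a legality mask (1 = locking legal).
logits = np.array([2.0, -1.0, 3.0], dtype=np.float32)
legal_mask = np.array([1.0, 1.0, 0.0], dtype=np.float32)   # 3rd edge illegal

# Sum of predicted-locked probability over illegal edges only:
rule2 = float(np.sum(sigmoid(logits) * (1.0 - legal_mask)))
print(round(rule2, 3))   # only sigmoid(3.0) contributes -> 0.953
```

Because the penalty is a smooth function of the logits, it back-propagates through the GNN alongside the BCE and reconstruction terms.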

Step 9 — Verify every episode's goal_prompt locally

Target use: after hf download, render a self-contained prose description of every episode's task so you can read the dataset without needing to decode what peg A/B/C mean. Also checks that each stored goal_prompt matches what we would canonically derive from its mission_kind + target_state.

# All Hanoi sessions (full prose form with starting state / task / target state)
python examples/09_verify_goal_prompts.py --all-hanoi

# One session
python examples/09_verify_goal_prompts.py --session hanoi/session_hanoi_0420_132840

# One episode
python examples/09_verify_goal_prompts.py --episode hanoi/session_hanoi_0420_132840/episode_04

# One-liner (raw goal_prompt only, no prose expansion)
python examples/09_verify_goal_prompts.py --all-hanoi --compact

# Only flag disagreements between stored goal_prompt and canonical form
python examples/09_verify_goal_prompts.py --all-hanoi --mismatches-only

Full output for a single episode looks like:

episode_04  [single_ring, 15 moves]
  Starting state — green ring (alone) on peg A; blue ring (alone) on peg B;
                   red → yellow  (top → bottom) on peg C.
  Task: move the blue ring from peg B to peg A.  Every other ring must end up
        back on its original peg, sorted smallest-on-top — any intermediate
        displacements must be undone.
  Target state — green → blue  (top → bottom) on peg A;
                 red → yellow  (top → bottom) on peg C.

The top of the output also prints a "physical layout" preamble explaining that peg A is the far peg in the side camera view, peg C is the closest, and peg B sits between them — so the peg letters cross-reference cleanly with the preview videos. On the current HF release you should see 0 prompt-vs-target mismatches.

Step 10 — Normalise all goal_prompts after download (run once)

Target use: the dataset on HF ships with goal_prompt values from different iterations of the prompt template. Run this right after hf download (and unzipping the zipped sessions) to rewrite every episode's goal_prompt into the canonical grounded form documented above.

# Preview (no writes):
python examples/10_upgrade_prompts.py --all-hanoi --dry-run

# Apply:
python examples/10_upgrade_prompts.py --all-hanoi

The script:

  • only touches metadata.json and annotations/side_graph.json (tiny files, near-instant)
  • re-derives the canonical prompt from (mission_kind, initial_state, target_state) — the three fields that never change per episode
  • is idempotent — re-running produces no changes
  • requires no network access

After running, python examples/09_verify_goal_prompts.py --all-hanoi --mismatches-only should report 0 mismatches.

Bonus: reproducing the dataset from a raw robot capture

If you capture your own Hanoi session (raw RGB + depth + robot_states.npy + metadata.json — i.e. before any labeling), run the bundled auto-labeler to produce the full v3 annotations/ tree. Every session already on HF was produced this way.

python tools/hanoi_pipeline/scripts/hanoi/auto_label.py \
    /path/to/session_hanoi_<date>_<time>/

Requires SAM2 base (see Step 0). For each episode it writes:

  • annotations/side_masks/frame_XXXXXX.npz — SAM2 masks (Hanoi-FT auto-selected)
  • annotations/side_embeddings/frame_XXXXXX.npz — 256-D pooled embeddings
  • annotations/side_depth_info/frame_XXXXXX.npz — 3-D positions + bboxes
  • annotations/side_robot/frame_XXXXXX.npz — robot bundle (zero-filled in Hanoi v1)
  • annotations/side_graph.json — structural edges + frame_states deltas (derived from solver_moves + gripper-based held-interval detection)
  • annotations/dataset_card.json — schema pointer

After auto-labeling, every example script above works on the new session unchanged. How the raw session gets captured in the first place (robot arm control, camera sync) is out of scope for this dataset repo — it requires physical hardware.

Shared: common v3 file schemas

side_graph.json

{
  "episode_id": "episode_00",
  "goal_component": "ring_1",            // Desktop: a product id; Hanoi: a ring id
  "view": "side",
  "components": [
    {"id": "ring_1", "type": "ring_1", "color": "#FF0000"}
  ],
  "edges": [
    {"src": "ring_1", "dst": "ring_3", "directed": true}
  ],
  "frame_states": {
    "0":   {"constraints": {"ring_1->ring_3": true},  "visibility": {"ring_1": true}, "held": {}},
    "120": {"constraints": {"ring_1->ring_3": false},                                   "held": {"ring_1": true}}
  },
  "node_positions": {"ring_1": [640, 360]},
  "type_vocab": ["ring_1", "ring_2", "ring_3", "ring_4"],     // Hanoi v1 — no robot
  "embedding_dim": 256,
  "feature_extractor": "sam2.1_hiera_base_plus",

  // Hanoi-only extras:
  "goal_prompt": "Move the red ring to peg B",
  "mission_kind": "single_ring",
  "target_state": {"peg_A": [], "peg_B": ["ring_1"], "peg_C": []}
}

side_depth_info/frame_XXXXXX.npz — 7 flat keys per component

| Key | Shape | Dtype | Meaning |
|---|---|---|---|
| `{cid}_point_cloud` | (N, 3) | float32 | 3D points in camera frame (m). (0, 3) if no valid depth |
| `{cid}_pixel_coords` | (N, 2) | int32 | (u, v) of valid depth pixels |
| `{cid}_raw_depths_mm` | (N,) | uint16 | Filtered to [50, 2000] |
| `{cid}_centroid` | (3,) | float32 | Mean of point_cloud; [0,0,0] if invalid |
| `{cid}_bbox_2d` | (4,) | int32 | [x1, y1, x2, y2] from mask |
| `{cid}_area` | (1,) | int32 | Mask pixel count |
| `{cid}_depth_valid` | (1,) | uint8 | 1 if N > 0 else 0 |
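Because the keys are flat, regrouping them per component takes a few lines. The sketch below builds a toy archive in memory and assumes the fixed-width ring_N naming used in Hanoi (Desktop component ids would need a different split rule):

```python
import io
import numpy as np

# Build a toy depth-info archive with the flat "{cid}_{field}" key layout.
buf = io.BytesIO()
np.savez(buf,
         ring_1_centroid=np.array([0.1, 0.0, 0.5], dtype=np.float32),
         ring_1_depth_valid=np.array([1], dtype=np.uint8),
         ring_2_centroid=np.zeros(3, dtype=np.float32),
         ring_2_depth_valid=np.array([0], dtype=np.uint8))
buf.seek(0)
d = np.load(buf)

# Group flat keys into {cid: {field: array}}; "ring_N" is always 6 chars.
grouped = {}
for k in d.files:
    cid, field = k[:6], k[7:]
    grouped.setdefault(cid, {})[field] = d[k]

print(sorted(grouped), grouped["ring_1"]["depth_valid"][0])
# ['ring_1', 'ring_2'] 1
```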

side_robot/frame_XXXXXX.npz — always 10 keys

| Key | Shape | Dtype | Meaning |
|---|---|---|---|
| `visible` | (1,) | uint8 | 1 if robot labeled, 0 otherwise |
| `mask` | (H, W) | uint8 | Binary mask |
| `embedding` | (256,) | float32 | SAM2 256-D |
| `point_cloud` | (N, 3) | float32 | 3D points (m) |
| `pixel_coords` | (N, 2) | int32 | (u, v) |
| `raw_depths_mm` | (N,) | uint16 | mm |
| `centroid` | (3,) | float32 | Mean of point cloud |
| `bbox_2d` | (4,) | int32 | From mask |
| `area` | (1,) | int32 | Pixel count |
| `depth_valid` | (1,) | uint8 | 1 if N > 0 else 0 |

Recording hardware

UR5e + Robotiq 2F-85 gripper; static-mounted Luxonis OAK-D Pro side view with intrinsics fx = 1033.8, fy = 1033.7, cx = 632.9, cy = 359.9; recording at 30 Hz, 1280 × 720 RGB and uint16 depth (mm) filtered to [50, 2000].
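With those intrinsics, back-projecting a depth pixel into the camera frame follows the standard pinhole model (presumably the same math behind the dataset's point_cloud / centroid entries). A sketch:

```python
import numpy as np

fx, fy, cx, cy = 1033.8, 1033.7, 632.9, 359.9   # OAK-D Pro side camera

def backproject(u, v, depth_mm):
    """Pixel (u, v) + depth in mm -> 3-D point in the camera frame (m)."""
    z = depth_mm / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z], dtype=np.float32)

# A pixel at the principal point maps straight down the optical axis:
p = backproject(632.9, 359.9, 500)   # -> x = 0, y = 0, z = 0.5 m
print(p)
```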

License

Released under CC BY 4.0. Use, share, and adapt freely with attribution.
