PSI 2.0: Pedestrian Situated Intent Dataset
Dataset Summary
PSI 2.0 (Pedestrian Situated Intent) is an egocentric dashcam video dataset for pedestrian crossing intention prediction. All videos are captured from the driver's perspective, introducing a significant domain shift compared to overhead CCTV datasets commonly used in pedestrian behavior research.
This upload is prepared for ECCV 2026 Workshop Track 3: Anomalous Events in Transportation, where PSI 2.0 serves as an out-of-domain test set for evaluating model generalization. The full dataset contains 204 trainval video clips with dense human annotations, with a 40-video held-out test set (newly human-annotated) arriving mid-May 2026.
Key properties:
- Egocentric (dashcam) viewpoint — domain shift from standard overhead CCTV datasets
- Multiple human annotators per video with intent labels, bounding boxes, and free-text reasoning
- LLaVA-format annotations included for direct use with video-language models
Dataset Structure
PSI2.0/
├── trainval/
│ ├── videos/ # 204 mp4 video clips (~15 seconds each)
│ └── annotations/ # per-pedestrian intent labels, bounding boxes, and reasoning
├── test/ # 40 newly human-annotated videos (arriving mid-May 2026)
├── annotations/
│ ├── llava_pedestrian_intent_trainval.json # LLaVA-format QA pairs (trainval)
│ └── llava_pedestrian_intent_test.json # (coming with the test set)
└── scripts/
├── generate_llava_annotations.py # convert PSI annotations → LLaVA format
└── split_clips_to_frames.py # extract frames from video clips
Directory details
| Path | Description |
|---|---|
| trainval/videos/ | 204 mp4 clips; videos 0001–0146 from the original PSI 2.0 TrainVal split, 0147–0204 from the original Test split |
| trainval/annotations/ | JSON files containing per-pedestrian crossing intent labels, bounding boxes, and multi-annotator reasoning descriptions |
| test/ | Placeholder: 40 newly human-annotated dashcam videos arriving mid-May 2026 |
| annotations/ | LLaVA-format instruction-tuning JSONs generated by generate_llava_annotations.py |
| scripts/ | Utility scripts for frame extraction and LLaVA annotation generation |
Annotation Format
Original PSI annotation schema
Each video has two annotation types under trainval/annotations/:
cognitive_annotation_key_frame/<video_id>/pedestrian_intent.json
Annotators label pedestrian crossing intent at key frames; labels forward-fill to subsequent frames until the next key frame.
pedestrians:
  <track_id>:
    observed_frames: [list of frame indices where pedestrian is visible]
    cv_annotations:
      bboxes: [[x1, y1, x2, y2], ...]                      # one bbox per observed frame
    cognitive_annotations:
      <annotator_id>:
        intent: ["cross" | "not_cross" | "not_sure" | ""]  # per observed frame
        key_frame: [1 | 0]                                 # 1 = label set here
        description: ["free-text reasoning at key frames", ...]
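A minimal sketch of reading these per-pedestrian labels, assuming the field names in the schema above; the record here is synthetic illustration data, not taken from the dataset:

```python
def intent_at_frame(ped, annotator_id, frame_idx):
    """Return the intent label an annotator assigned to a pedestrian at a
    given observed-frame index (labels are already forward-filled from the
    preceding key frame, per the annotation scheme above)."""
    ann = ped["cognitive_annotations"][annotator_id]
    return ann["intent"][frame_idx]

# Synthetic record mirroring the schema above (not real data):
# key frames at observed indices 0 and 2; the label at index 3 is
# forward-filled from the key frame at index 2.
ped = {
    "observed_frames": [0, 1, 2, 3],
    "cv_annotations": {"bboxes": [[10, 20, 50, 90]] * 4},
    "cognitive_annotations": {
        "annotator_0": {
            "intent": ["not_sure", "not_sure", "cross", "cross"],
            "key_frame": [1, 0, 1, 0],
            "description": ["standing at curb", "steps toward road"],
        }
    },
}

print(intent_at_frame(ped, "annotator_0", 3))  # cross
```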
Intent labels:
| Label | Meaning |
|---|---|
| cross | Pedestrian intends to cross in front of the vehicle |
| not_cross | Pedestrian does not intend to cross |
| not_sure | Annotator is uncertain |
cognitive_annotation_key_frame/<video_id>/driving_decision.json
Per-frame driving speed and direction decisions from human annotators (e.g., maintainSpeed / goStraight).
cv_annotation/<video_id>/cv_annotation.json
Computer vision annotations: per-frame bounding boxes for all tracked objects (pedestrians, cars, etc.) with object type and track IDs.
LLaVA Format
annotations/llava_pedestrian_intent_trainval.json contains instruction-tuning QA pairs generated via a sliding window over each pedestrian track. The label at the last observed frame of each window is used as the ground truth (forward-filled from key frames).
Window defaults: window_size=90 observed frames, window_step=45.
Example entry:
{
  "id": "video_0001_ped_1_w0-89_annotator_0",
  "video": "trainval/videos/video_0001.mp4",
  "window_start": 0,
  "window_end": 89,
  "conversations": [
    {
      "from": "human",
      "value": "<video>\nObserving frames 0 to 89, will pedestrian ped_1 cross the road in front of the ego vehicle?"
    },
    {
      "from": "gpt",
      "value": "The pedestrian has the crossing intention."
    }
  ]
}
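The windowing can be sketched as below. This is a simplified reimplementation for illustration, not the script itself; generate_llava_annotations.py is authoritative for edge cases such as short tracks:

```python
def sliding_windows(num_observed, window_size=90, window_step=45):
    """Yield inclusive (start, end) index pairs over a pedestrian track's
    observed frames, matching the window_start/window_end fields above.
    Tracks shorter than window_size yield one truncated window."""
    for start in range(0, max(num_observed - window_size + 1, 1), window_step):
        end = min(start + window_size, num_observed) - 1
        yield start, end

# A track observed for 200 frames with the default settings:
print(list(sliding_windows(200)))  # [(0, 89), (45, 134), (90, 179)]
```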
GPT response strings:
| Intent | Response |
|---|---|
| cross | "The pedestrian has the crossing intention." |
| not_cross | "The pedestrian does not have the crossing intention." |
| not_sure | "It is not sure to tell the pedestrian crossing intention." |
Generating LLaVA Annotations
# Default parameters (window_size=90, window_step=45)
python3 scripts/generate_llava_annotations.py
# Custom sliding window
python3 scripts/generate_llava_annotations.py --window_size 60 --window_step 30
Outputs are written to annotations/ and never modify the original source files.
The trainval split is kept unified in a single JSON. To apply a train/val partition,
filter by video field: videos 0001–0110 → train, 0112–0146 → val
(following the original PSI 2.0 PSI2_split.json groupings).
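One possible way to apply that partition, assuming the video field format shown in the example entry above; PSI2_split.json remains the authoritative grouping:

```python
def split_trainval(entries):
    """Partition LLaVA entries by video number: 0001-0110 -> train,
    0112-0146 -> val. Videos outside those ranges are left out."""
    train, val = [], []
    for e in entries:
        # e["video"] looks like "trainval/videos/video_0001.mp4"
        num = int(e["video"].rsplit("_", 1)[-1].split(".")[0])
        if num <= 110:
            train.append(e)
        elif 112 <= num <= 146:
            val.append(e)
    return train, val

# Illustrative entries (only the "video" field matters for splitting):
entries = [{"video": "trainval/videos/video_0001.mp4"},
           {"video": "trainval/videos/video_0120.mp4"}]
train, val = split_trainval(entries)
print(len(train), len(val))  # 1 1
```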
Data Split
| Split | Videos | Source | Status |
|---|---|---|---|
| trainval | 204 | PSI2.0_TrainVal (146) + PSI2.0_Test (58) | Available |
| test | 40 | Newly human-annotated dashcam videos | Coming mid-May 2026 |
Data Use Agreement
By accessing this dataset, you agree to the terms of the TASI Benchmark Data Sharing Agreement (https://s3.amazonaws.com/pedestriandataset.situated-intent.net/TASI+Benchmark+Data+Sharing+Agreement_PSI.pdf).
This dataset is intended for academic and non-commercial research purposes only.
Citation
If you use this dataset, please cite the original PSI paper:
@article{chen2021psi,
title={PSI: A Pedestrian Behavior Dataset for Socially Intelligent Autonomous Car},
author={Chen, Tina and Tian, Renran and Domeyer, Joshua and Sherony, Rini and Ohn-Bar, Eshed},
journal={arXiv preprint arXiv:2112.02604},
year={2021}
}
License
This dataset upload is released under the MIT License. Please also comply with the license terms of the original PSI dataset.