
# PoultryVision Unified Dataset

A large-scale, multi-modal poultry-farm dataset unifying six public sources for detection, classification, multi-camera tracking, and behavior analysis of chickens (broilers, hens, cocks) and eggs.

This dataset was built to train Williamsanderson/PoultryVision, a YOLOv11m model that outperforms the fine-tuned YOLOv11x reported by Cardoen et al. (MVBroTrack paper, 2025) by +8.5 points of mAP@50-95.


## 📊 Dataset at a glance

### Object detection (YOLO format)

| Split | Images |
|-------|--------|
| Train | 15 987 |
| Val | 3 706 |
| Test | 1 893 |
| **Total** | **21 586** |

### Image classification

| Split | Images |
|-------|--------|
| Train | 1 832 |
| Val | 444 |
| Test | 263 |
| **Total** | **2 539** |

### Videos & multi-camera

- 24 MP4 videos from 4 synchronized cameras (cam 9 / 10 / 11 / 12) across 6 samples
- Camera calibration files (intrinsics + extrinsics) for every camera
- Reprojection masks defining the ground-plane region of interest
- Tracking ground truth on the ground plane
- Pre-computed YOLO detections per frame for every multi-view sample

### Classes (detection)

| ID | Name | Description |
|----|------|-------------|
| 0 | `chicken` | All poultry: broilers, hens, cocks |
| 1 | `egg` | Chicken eggs |

## 📚 Source datasets

| # | Source | Type | Link / Reference |
|---|--------|------|------------------|
| 1 | Dataset Chicken 1 | Classification (images.cv) | images.cv |
| 2 | Dataset Chicken 2 | Classification (images.cv) | images.cv |
| 3 | Dataset Chicken 3 | Detection (COCO, Roboflow) | Roboflow Universe |
| 4 | Chickens-Eggs v1 | Detection (YOLOv8, Roboflow) | Roboflow Universe |
| 5 | chicken eggs 2 v3 | Detection + segmentation | Roboflow Universe |
| 6 | MVBroTrack | Multi-camera broiler tracking | Cardoen et al., Computers and Electronics in Agriculture, 2025 |

All six sources were standardized to a unified YOLO detection format (and/or ImageFolder classification format), deduplicated, and split into train/val/test.
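Since the detection labels use the standard YOLO text format (one `class_id cx cy w h` line per object, with coordinates normalized to [0, 1]), a label line can be decoded with a few lines of Python. The helper below is an illustrative sketch, not part of the dataset tooling:

```python
def parse_yolo_label_line(line, img_w, img_h):
    """Parse one YOLO label line: 'class_id cx cy w h' (normalized 0-1).

    Returns the class id and the box as pixel coordinates
    (x_min, y_min, x_max, y_max).
    """
    class_id, cx, cy, w, h = line.split()
    cx, cy, w, h = (float(v) for v in (cx, cy, w, h))
    x_min = (cx - w / 2) * img_w
    y_min = (cy - h / 2) * img_h
    x_max = (cx + w / 2) * img_w
    y_max = (cy + h / 2) * img_h
    return int(class_id), (x_min, y_min, x_max, y_max)
```

For example, the line `0 0.5 0.5 0.25 0.5` in a 256 x 256 image describes a `chicken` box centered in the frame, a quarter of the width wide and half the height tall.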


## 🗂️ Folder structure

```
PoultryVision-Dataset/
├── data.yaml                      # Ultralytics-compatible dataset config
├── images/                        # Detection images
│   ├── train/   (15 987)
│   ├── val/     ( 3 706)
│   └── test/    ( 1 893)
├── labels/                        # YOLO .txt labels (one per image)
│   ├── train/
│   ├── val/
│   └── test/
├── classification/                # ImageFolder layout
│   ├── train/<class>/*.jpg
│   ├── val/<class>/*.jpg
│   └── test/<class>/*.jpg
├── videos/                        # 24 MP4 videos (4 cameras × 6 samples)
├── calibrations/                  # Camera calibration (intrinsics + extrinsics)
│   └── cam_<id>/
│       ├── intrinsics/{cameraMatrix.txt, distCoeffs.txt}
│       └── extrinsics/{rvec.txt, tvec.txt}
├── multi_view_detection/          # Pre-computed per-frame YOLO detections
├── reprojection_masks/            # Ground-plane ROI masks
└── tracking_gt/                   # Multi-camera tracking ground truth
```
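In this layout each detection image pairs with a same-named `.txt` file under `labels/`. A small helper (illustrative; the file name is hypothetical) can derive one path from the other:

```python
from pathlib import Path


def label_path_for(image_path):
    """Map a detection image path to its YOLO label path.

    Follows the layout above:
    images/<split>/<name>.<ext> -> labels/<split>/<name>.txt
    """
    p = Path(image_path)
    parts = list(p.parts)
    parts[parts.index("images")] = "labels"  # swap the top-level folder
    return Path(*parts).with_suffix(".txt")
```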

## 🚀 Quick start

### Download

```bash
pip install huggingface_hub
hf download --repo-type dataset Williamsanderson/PoultryVision-Dataset --local-dir PoultryVision-Dataset
```

### Train a YOLO detector

```python
from ultralytics import YOLO

model = YOLO("yolo11m.pt")
model.train(
    data="PoultryVision-Dataset/data.yaml",
    epochs=70,
    imgsz=640,
    batch=16,
    optimizer="AdamW",
    lr0=0.001,
)
```

Reference YOLO recipe that produced the published model:

```yaml
model: yolo11m.pt
epochs: 70
imgsz: 640
optimizer: AdamW
lr0: 0.001
hsv_h: 0.015
hsv_s: 0.7
hsv_v: 0.4
mosaic: 1.0
mixup: 0.1
close_mosaic: 10
auto_augment: randaugment
```

### Image classification

```python
from torchvision import transforms
from torchvision.datasets import ImageFolder

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train = ImageFolder("PoultryVision-Dataset/classification/train", transform=tfm)
```

### Multi-camera tracking

Calibration files follow the MVBroTrack paper convention: each camera folder contains `cameraMatrix.txt`, `distCoeffs.txt`, `rvec.txt`, and `tvec.txt`. A full multi-view pipeline (Algorithms 1 and 2 of the paper, Tracking-by-Curve-Matching) ships with the Williamsanderson/PoultryVision model repo.
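As a sketch of how these calibration files can be used (assuming each loads into a NumPy array via `np.loadtxt`, and ignoring lens distortion), projecting a ground-plane point into a camera image follows the standard pinhole model with a Rodrigues rotation vector:

```python
import numpy as np


def rodrigues(rvec):
    """Convert a Rodrigues rotation vector to a 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)


def project_ground_point(X_world, camera_matrix, rvec, tvec):
    """Project a 3D world point to pixel coordinates (distortion ignored)."""
    R = rodrigues(rvec)
    Xc = R @ X_world + tvec            # world -> camera coordinates
    x = camera_matrix @ (Xc / Xc[2])   # perspective divide, then intrinsics
    return x[:2]
```

In practice the distortion coefficients in `distCoeffs.txt` should also be applied (e.g. via OpenCV's `cv2.projectPoints`); this sketch only shows the geometry.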


πŸ† Benchmark

Model trained on this dataset (YOLOv11m, 70 epochs, imgsz 640, AdamW):

Metric Value
mAP@50-95 0.793
mAP@50 0.971
Precision 0.934
Recall 0.934

Compared to the MVBroTrack paper (Cardoen et al., 2025):

| Model | mAP@50-95 | Params |
|-------|-----------|--------|
| YOLOv11x fine-tuned (paper) | 70.8 % | 56.9 M |
| YOLOv11m fine-tuned (ours) | 79.3 % | 20.1 M |

## ⚖️ License

This dataset is released under CC-BY-4.0.

- The unified packaging, splits, and label harmonization are © 2025 Williams Anderson, CC-BY-4.0.
- Individual source datasets retain their original licenses:
  - MVBroTrack (Cardoen et al., 2025): see the original paper and its data statement
  - Roboflow Universe datasets: typically CC-BY-4.0 (check each source)
  - images.cv datasets: CC-BY-4.0 / public domain
- Please cite the original sources if you use the corresponding subsets.

## 📚 Citation

```bibtex
@misc{williamsanderson_poultryvision_dataset_2025,
  title        = {PoultryVision: A Unified Dataset for Poultry-Farm Computer Vision},
  author       = {Williams Anderson},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/Williamsanderson/PoultryVision-Dataset}},
}

@article{cardoen2025mvbrotrack,
  title   = {Multi-camera detection and tracking for individual broiler monitoring},
  author  = {Cardoen, J. and others},
  journal = {Computers and Electronics in Agriculture},
  year    = {2025}
}
```

πŸ™ Acknowledgements

  • Cardoen et al. (MVBroTrack) for the multi-camera broiler data
  • Roboflow and images.cv communities for the chicken / egg datasets
  • Ultralytics for the YOLOv11 framework that produced the reference model