
GLM-OCR Fine-Tuning Pipeline

Fine-tuning GLM-OCR 0.9B (CogViT encoder + GLM-0.5B decoder) for Korean financial document table recognition using LoRA via LLaMA-Factory.

Performance Targets

| Metric | Target |
|---|---|
| TEDS (2-level nested) | >= 90% |
| TEDS (3-level nested) | >= 85% |
| Korean CER | <= 1% |
| Latency | <= 0.5 s/page |

Directory Structure

```
glm_ocr_finetuning/
├── config/                          # Training/eval YAML configs
│   ├── training_config.yaml         # LoRA + LLaMA-Factory config
│   ├── preprocessing_config.yaml
│   └── evaluation_config.yaml
│
├── data/                            # ALL raw source data (gitignored)
│   ├── existing/                    #   216 pre-labeled BNK images+labels
│   │   ├── images/*.png
│   │   └── labels/*.html
│   ├── dart/                        #   9,375 DART financial tables
│   │   ├── images/*.png
│   │   └── labels/*.html
│   ├── fss/                         #   209 FSS tables (flat png+html)
│   ├── wiki/                        #   299 Korean Wikipedia tables (flat png+html)
│   ├── synthetic/                   #   1,000 Jinja2-generated tables
│   │   ├── images/*.png
│   │   └── labels/*.html
│   └── synthetic-shilu/             #   3,000 BNK financial tables + metadata.json
│
├── data_collection/                 # Data collection & formatting scripts
│   ├── format_training_data.py      # Unified formatter (all sources -> JSONL)
│   ├── format_existing_dataset.py   # Legacy formatter for existing dataset
│   ├── scrape_dart.py               # DART financial reports scraper
│   ├── scrape_fss_stats.py          # FSS statistics scraper
│   ├── scrape_wiki_tables.py        # Korean Wikipedia table scraper
│   └── synthetic/                   # Synthetic data generation
│       ├── generate_synthetic.py    # Playwright-based table renderer
│       ├── data_pools.py            # Korean vocabulary pools
│       └── templates/               # 16 Jinja2 HTML templates
│
├── dataset/                         # Pipeline output (train/val/test splits)
│   ├── train.jsonl                  # 12,017 training entries
│   ├── val.jsonl                    # 2,123 validation entries
│   ├── test.jsonl                   # 33 test entries
│   ├── training_data.jsonl          # Combined 14,140 entries
│   ├── dataset_stats.csv            # Per-entry statistics
│   └── llamafactory/                # LLaMA-Factory ShareGPT format (gitignored)
│       ├── train_sharegpt.json
│       ├── val_sharegpt.json
│       └── dataset_info.json
│
├── preprocessing/
│   ├── preprocess.py                # Image preprocessing (DPI, deskew, CLAHE)
│   └── postprocess.py               # HTML postprocessing (repair, Korean correction)
│
├── evaluation/
│   ├── metrics/
│   │   ├── teds.py                  # Tree-Edit-Distance-based Similarity
│   │   ├── cer_wer.py               # Character/Word Error Rate
│   │   ├── table_f1.py              # Table F1 score
│   │   └── nesting_depth.py         # Nesting depth accuracy
│   └── test_set/                    # Curated test images (TBD)
│
├── training/
│   ├── train.py                     # Main training entry point
│   ├── direct_trainer.py            # Direct HuggingFace trainer
│   └── convert_dataset.py           # JSONL -> LLaMA-Factory converter
│
├── evaluate.py                      # Evaluation runner
└── outputs/                         # Training outputs (gitignored)
```

Dataset Summary

14,099 unique entries (14,140 after oversampling for balance), from 6 sources:

| Source | Count | Description |
|---|---|---|
| existing | 216 | Pre-labeled BNK reference dataset (augmented) |
| dart | 9,375 | DART financial disclosure tables (scraped) |
| fss | 209 | FSS statistical tables |
| wiki | 299 | Korean Wikipedia tables |
| synthetic | 1,000 | Jinja2-generated tables (Playwright rendered) |
| synthetic-shilu | 3,000 | BNK financial tables (16 templates, 2/3-level + merged) |

Category distribution:

| Category | Count | % |
|---|---|---|
| plain | 3,675 | 26.0% |
| merged (rowspan/colspan) | 9,300 | 65.8% |
| nested_2level | 715 | 5.1% |
| nested_3level | 450 | 3.2% |

Splits: Train 12,017 / Val 2,123 / Test 33
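The oversampling mentioned above (14,099 unique entries padded to 14,140 for balance) can be sketched as follows. The 5% floor and the function name are illustrative assumptions, not the actual logic in `format_training_data.py`:

```python
import random
from collections import Counter

def oversample(entries, key="category", seed=42):
    """Duplicate entries from rare categories until each category
    reaches a minimum share of the dataset (illustrative sketch)."""
    rng = random.Random(seed)
    counts = Counter(e[key] for e in entries)
    floor = max(counts.values()) // 20  # hypothetical 5%-of-largest floor
    extra = []
    for cat, n in counts.items():
        if n < floor:
            pool = [e for e in entries if e[key] == cat]
            extra.extend(rng.choices(pool, k=floor - n))
    return entries + extra
```

Duplicated entries point at the same images, so the padding costs no disk space; only the JSONL grows.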

Quick Start

1. Format training data (all sources)

```bash
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/data_collection/format_training_data.py \
  --sources all \
  --output backend/scripts/glm_ocr_finetuning/dataset/
```

2. Convert to LLaMA-Factory format

```bash
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/training/convert_dataset.py \
  --input backend/scripts/glm_ocr_finetuning/dataset/train.jsonl \
         backend/scripts/glm_ocr_finetuning/dataset/val.jsonl \
  --output backend/scripts/glm_ocr_finetuning/dataset/llamafactory/
```

3. Dry-run training

```bash
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/training/train.py --dry-run
```

4. Train on GPU (requires LLaMA-Factory + GPU)

```bash
pip install llamafactory
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/training/train.py --mode llamafactory
```

JSONL Schema

Each entry in train.jsonl / val.jsonl:

```json
{
  "image_path": "backend/scripts/glm_ocr_finetuning/data/dart/images/dart_xxx.png",
  "ground_truth_html": "<table><tr><td>...</td></tr></table>",
  "nesting_level": 1,
  "source": "dart",
  "category": "merged",
  "metadata": {
    "original_filename": "dart_xxx.png",
    "validation": {
      "table_count": 1,
      "row_count": 15,
      "cell_count": 60,
      "has_rowspan": true,
      "has_colspan": true,
      "text_length": 500
    }
  }
}
```

Training Configuration

From config/training_config.yaml:

| Parameter | Value |
|---|---|
| Base model | zai-org/GLM-OCR |
| LoRA rank | 8 |
| LoRA alpha | 16 |
| LoRA targets | all linear layers |
| Batch size | 2 (effective 16 with grad accum) |
| Learning rate | 1e-4 (cosine schedule) |
| Epochs | 3 |
| Precision | bf16 |
| Max seq length | 4096 |
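Expressed as a LLaMA-Factory YAML fragment, the table above would look roughly like this. Key names follow LLaMA-Factory's usual SFT/LoRA options and are a sketch; the authoritative file is `config/training_config.yaml`:

```yaml
model_name_or_path: zai-org/GLM-OCR
finetuning_type: lora
lora_rank: 8
lora_alpha: 16
lora_target: all                  # all linear layers
per_device_train_batch_size: 2
gradient_accumulation_steps: 8    # 2 x 8 = effective batch size 16
learning_rate: 1.0e-4
lr_scheduler_type: cosine
num_train_epochs: 3
bf16: true
cutoff_len: 4096
```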

Adding New Data Sources

  1. Place raw data under data/<source_name>/ (flat *.png + *.html pairs, or images/ + labels/ subdirs)
  2. Add a loader in data_collection/format_training_data.py:
    • Flat structure (png+html side by side): use load_flat_source()
    • Subdirectory structure (images/ + labels/): use load_scraped_source()
  3. Register in SOURCE_LOADERS dict
  4. Add to argparse --sources choices
  5. Re-run the pipeline (steps 1-3 from Quick Start)

Data Collection Scripts

| Script | Source | Output | Notes |
|---|---|---|---|
| scrape_dart.py | DART API | data/dart/ | Uses OpenDART API, renders via Playwright |
| scrape_fss_stats.py | FSS Korea | data/fss/ | Flat png+html pairs |
| scrape_wiki_tables.py | Korean Wikipedia | data/wiki/ | Flat png+html pairs |
| synthetic/generate_synthetic.py | Jinja2 templates | data/synthetic/ | 16 templates, async Playwright |
| format_training_data.py | All sources | dataset/ | Unified formatter with validation + dedup |

Evaluation

```bash
# Run evaluation on test set with live model
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/evaluate.py \
  --test-set backend/scripts/glm_ocr_finetuning/dataset/test.jsonl \
  --live --endpoint http://localhost:8001
```
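Of the target metrics, CER is the simplest to reproduce by hand. A minimal sketch using Levenshtein edit distance over characters (the actual `evaluation/metrics/cer_wer.py` may normalize text differently):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein edits / reference length."""
    if not reference:
        return float(len(hypothesis) > 0)
    # prev[j] = edit distance from "" to the first j chars of hypothesis
    prev = list(range(len(hypothesis) + 1))
    for i, r in enumerate(reference, 1):
        cur = [i]
        for j, h in enumerate(hypothesis, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1] / len(reference)
```

With the <= 1% target, a 500-character table may contain at most 5 character errors.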

Key Design Decisions

  1. Single data/ directory: All 6 raw data sources under one gitignored directory for consistency
  2. GLM-OCR 0.9B: Lightweight vision-language model purpose-built for table recognition
  3. LLaMA-Factory: Standardized SFT framework with ShareGPT format for easy training
  4. LoRA (rank 8): Efficient fine-tuning, fits on consumer GPUs
  5. Korean-only training data: Optimized for Korean financial documents
  6. Flat + subdirectory loaders: Flexible data ingestion supporting both file layouts

Team

  • Omar: Model infrastructure, training pipeline, preprocessing/postprocessing
  • Shilu: Data collection (FSS, Wikipedia, synthetic-shilu 3,000 tables)
