Fine-tuning GLM-OCR 0.9B (CogViT encoder + GLM-0.5B decoder) for Korean financial document table recognition using LoRA via LLaMA-Factory.
| Metric | Target |
|---|---|
| TEDS (2-level nested) | >= 90% |
| TEDS (3-level nested) | >= 85% |
| Korean CER | <= 1% |
| Latency | <= 0.5s/page |
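The CER target above is character-level edit distance normalized by reference length. A minimal pure-Python sketch (the helper names are illustrative, not the actual evaluation/metrics/cer_wer.py API):

```python
# Minimal sketch of Character Error Rate (CER): Levenshtein edit distance
# over characters, normalized by the reference length.

def levenshtein(ref: str, hyp: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """CER = edit distance / reference length (guarding against empty refs)."""
    return levenshtein(ref, hyp) / max(len(ref), 1)
```

A one-character substitution in a five-character Korean string, for example, yields a CER of 0.2 — so the <= 1% target tolerates roughly one wrong character per hundred.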
```
glm_ocr_finetuning/
├── config/                         # Training/eval YAML configs
│   ├── training_config.yaml        # LoRA + LLaMA-Factory config
│   ├── preprocessing_config.yaml
│   └── evaluation_config.yaml
│
├── data/                           # ALL raw source data (gitignored)
│   ├── existing/                   # 216 pre-labeled BNK images+labels
│   │   ├── images/*.png
│   │   └── labels/*.html
│   ├── dart/                       # 9,375 DART financial tables
│   │   ├── images/*.png
│   │   └── labels/*.html
│   ├── fss/                        # 209 FSS tables (flat png+html)
│   ├── wiki/                       # 299 Korean Wikipedia tables (flat png+html)
│   ├── synthetic/                  # 1,000 Jinja2-generated tables
│   │   ├── images/*.png
│   │   └── labels/*.html
│   └── synthetic-shilu/            # 3,000 BNK financial tables + metadata.json
│
├── data_collection/                # Data collection & formatting scripts
│   ├── format_training_data.py     # Unified formatter (all sources -> JSONL)
│   ├── format_existing_dataset.py  # Legacy formatter for existing dataset
│   ├── scrape_dart.py              # DART financial reports scraper
│   ├── scrape_fss_stats.py         # FSS statistics scraper
│   ├── scrape_wiki_tables.py       # Korean Wikipedia table scraper
│   └── synthetic/                  # Synthetic data generation
│       ├── generate_synthetic.py   # Playwright-based table renderer
│       ├── data_pools.py           # Korean vocabulary pools
│       └── templates/              # 16 Jinja2 HTML templates
│
├── dataset/                        # Pipeline output (train/val/test splits)
│   ├── train.jsonl                 # 12,017 training entries
│   ├── val.jsonl                   # 2,123 validation entries
│   ├── test.jsonl                  # 33 test entries
│   ├── training_data.jsonl         # Combined 14,140 entries
│   ├── dataset_stats.csv           # Per-entry statistics
│   └── llamafactory/               # LLaMA-Factory ShareGPT format (gitignored)
│       ├── train_sharegpt.json
│       ├── val_sharegpt.json
│       └── dataset_info.json
│
├── preprocessing/
│   ├── preprocess.py               # Image preprocessing (DPI, deskew, CLAHE)
│   └── postprocess.py              # HTML postprocessing (repair, Korean correction)
│
├── evaluation/
│   ├── metrics/
│   │   ├── teds.py                 # Tree-Edit-Distance-based Similarity
│   │   ├── cer_wer.py              # Character/Word Error Rate
│   │   ├── table_f1.py             # Table F1 score
│   │   └── nesting_depth.py        # Nesting depth accuracy
│   └── test_set/                   # Curated test images (TBD)
│
├── training/
│   ├── train.py                    # Main training entry point
│   ├── direct_trainer.py           # Direct HuggingFace trainer
│   └── convert_dataset.py          # JSONL -> LLaMA-Factory converter
│
├── evaluate.py                     # Evaluation runner
└── outputs/                        # Training outputs (gitignored)
```
14,099 unique entries (14,140 after oversampling for balance), from 6 sources:
| Source | Count | Description |
|---|---|---|
| existing | 216 | Pre-labeled BNK reference dataset (augmented) |
| dart | 9,375 | DART financial disclosure tables (scraped) |
| fss | 209 | FSS statistical tables |
| wiki | 299 | Korean Wikipedia tables |
| synthetic | 1,000 | Jinja2-generated tables (Playwright rendered) |
| synthetic-shilu | 3,000 | BNK financial tables (16 templates, 2/3-level + merged) |
Category distribution:
| Category | Count | % |
|---|---|---|
| plain | 3,675 | 26.0% |
| merged (rowspan/colspan) | 9,300 | 65.8% |
| nested_2level | 715 | 5.1% |
| nested_3level | 450 | 3.2% |
Splits: Train 12,017 / Val 2,123 / Test 33
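A category-stratified split like the one above can be sketched as follows; the ratio, seed, and function name are illustrative, not necessarily how the pipeline actually splits:

```python
# Sketch of a train/val split that preserves per-category proportions.
# Assumes each entry is a dict with a "category" key, as in the JSONL schema.
import random
from collections import defaultdict

def stratified_split(entries, val_ratio=0.15, seed=42):
    rng = random.Random(seed)
    by_cat = defaultdict(list)
    for e in entries:
        by_cat[e["category"]].append(e)
    train, val = [], []
    for items in by_cat.values():
        rng.shuffle(items)           # deterministic given the seed
        k = int(len(items) * val_ratio)
        val.extend(items[:k])        # first k of each category go to val
        train.extend(items[k:])
    return train, val
```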
```bash
# 1. Format all raw sources into JSONL splits
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/data_collection/format_training_data.py \
    --sources all \
    --output backend/scripts/glm_ocr_finetuning/dataset/

# 2. Convert JSONL splits to LLaMA-Factory ShareGPT format
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/training/convert_dataset.py \
    --input backend/scripts/glm_ocr_finetuning/dataset/train.jsonl \
            backend/scripts/glm_ocr_finetuning/dataset/val.jsonl \
    --output backend/scripts/glm_ocr_finetuning/dataset/llamafactory/

# 3. Verify the setup without training
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/training/train.py --dry-run

# 4. Install LLaMA-Factory, then launch LoRA training
pip install llamafactory
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/training/train.py --mode llamafactory
```
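The conversion performed by convert_dataset.py can be sketched roughly like this; the prompt text, role keys, and `<image>` placeholder are assumptions about LLaMA-Factory's multimodal ShareGPT format, not the script's exact output:

```python
# Hypothetical sketch of JSONL -> ShareGPT conversion for one entry.
import json

def to_sharegpt(entry: dict) -> dict:
    """Map one JSONL entry to a ShareGPT-style conversation record."""
    return {
        "messages": [
            {"role": "user",
             "content": "<image>Convert this table image to HTML."},
            {"role": "assistant",
             "content": entry["ground_truth_html"]},
        ],
        "images": [entry["image_path"]],
    }

def convert(jsonl_path: str) -> list:
    """Read a JSONL split and convert every entry."""
    with open(jsonl_path, encoding="utf-8") as f:
        return [to_sharegpt(json.loads(line)) for line in f if line.strip()]
```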
Each entry in train.jsonl / val.jsonl:
```json
{
  "image_path": "backend/scripts/glm_ocr_finetuning/data/dart/images/dart_xxx.png",
  "ground_truth_html": "<table><tr><td>...</td></tr></table>",
  "nesting_level": 1,
  "source": "dart",
  "category": "merged",
  "metadata": {
    "original_filename": "dart_xxx.png",
    "validation": {
      "table_count": 1,
      "row_count": 15,
      "cell_count": 60,
      "has_rowspan": true,
      "has_colspan": true,
      "text_length": 500
    }
  }
}
```
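The "validation" block can be derived from the ground-truth HTML with the stdlib parser alone. Field names below match the schema; the implementation is a sketch, not the pipeline's actual validator:

```python
# Sketch: compute per-entry validation stats from ground-truth table HTML.
from html.parser import HTMLParser

class TableStats(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stats = {"table_count": 0, "row_count": 0, "cell_count": 0,
                      "has_rowspan": False, "has_colspan": False,
                      "text_length": 0}

    def handle_starttag(self, tag, attrs):
        if tag == "table":
            self.stats["table_count"] += 1
        elif tag == "tr":
            self.stats["row_count"] += 1
        elif tag in ("td", "th"):
            self.stats["cell_count"] += 1
            keys = dict(attrs)
            self.stats["has_rowspan"] |= "rowspan" in keys
            self.stats["has_colspan"] |= "colspan" in keys

    def handle_data(self, data):
        self.stats["text_length"] += len(data.strip())

def validate_html(html: str) -> dict:
    parser = TableStats()
    parser.feed(html)
    return parser.stats
```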
From config/training_config.yaml:
| Parameter | Value |
|---|---|
| Base model | zai-org/GLM-OCR |
| LoRA rank | 8 |
| LoRA alpha | 16 |
| LoRA targets | all linear layers |
| Batch size | 2 (effective 16 with grad accum) |
| Learning rate | 1e-4 (cosine schedule) |
| Epochs | 3 |
| Precision | bf16 |
| Max seq length | 4096 |
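With rank 8, each adapted linear layer W of shape (d_out, d_in) stays frozen and only the low-rank factors B (d_out × r) and A (r × d_in) train, so LoRA adds r·(d_in + d_out) trainable parameters per layer. A back-of-the-envelope helper (the layer dimensions in the comment are hypothetical, not GLM-OCR's actual shapes):

```python
def lora_params_per_layer(d_in: int, d_out: int, r: int = 8) -> int:
    """Trainable parameters LoRA adds to one linear layer:
    B (d_out x r) plus A (r x d_in)."""
    return r * (d_in + d_out)

# e.g. a hypothetical 4096x4096 projection at rank 8:
# full layer holds 16,777,216 weights; LoRA trains only 65,536 of them.
```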
To add a new data source:

1. Place the data under data/<source_name>/ (flat *.png + *.html pairs, or images/ + labels/ subdirs).
2. In data_collection/format_training_data.py, register a loader (load_flat_source() or load_scraped_source()) in the SOURCE_LOADERS dict.
3. Add the new name to the --sources choices.

| Script | Source | Output | Notes |
|---|---|---|---|
| scrape_dart.py | DART API | data/dart/ | Uses OpenDART API, renders via Playwright |
| scrape_fss_stats.py | FSS Korea | data/fss/ | Flat png+html pairs |
| scrape_wiki_tables.py | Korean Wikipedia | data/wiki/ | Flat png+html pairs |
| synthetic/generate_synthetic.py | Jinja2 templates | data/synthetic/ | 16 templates, async Playwright |
| format_training_data.py | All sources | dataset/ | Unified formatter with validation + dedup |
```bash
# Run evaluation on the test set against a live model
backend/.venv/bin/python backend/scripts/glm_ocr_finetuning/evaluate.py \
    --test-set backend/scripts/glm_ocr_finetuning/dataset/test.jsonl \
    --live --endpoint http://localhost:8001
```
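Aggregate results can be checked against the targets table at the top of this README; the result-dict keys below are assumptions about evaluate.py's output, only the thresholds come from that table:

```python
# Sketch: compare aggregate evaluation results against the project targets.
TARGETS = {
    "teds_2level": 0.90,   # TEDS, 2-level nested, >=
    "teds_3level": 0.85,   # TEDS, 3-level nested, >=
    "cer": 0.01,           # Korean CER, <=
    "latency_s": 0.5,      # seconds per page, <=
}

def meets_targets(results: dict) -> dict:
    """Return a pass/fail flag per metric."""
    return {
        "teds_2level": results["teds_2level"] >= TARGETS["teds_2level"],
        "teds_3level": results["teds_3level"] >= TARGETS["teds_3level"],
        "cer": results["cer"] <= TARGETS["cer"],
        "latency_s": results["latency_s"] <= TARGETS["latency_s"],
    }
```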
Design note: the data/ directory keeps all 6 raw data sources under one gitignored location for consistency.