# Byte-Level BPE Tokenizer: dan_Latn, deu_Latn, nld_Latn, swe_Latn (16K)
A byte-level BPE tokenizer trained on Danish (dan_Latn), German (deu_Latn), Dutch (nld_Latn), and Swedish (swe_Latn) data from Fineweb-2-HQ.
## Training Details
| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Languages | dan_Latn, deu_Latn, nld_Latn, swe_Latn |
| Target Vocab Size | 16,000 |
| Final Vocab Size | 16,953 |
| Pre-tokenizer | custom:dan_Latn |
| Number handling | ltr_3digit |
| Contraction handling | True |
| Normalizer | NFC |
| Special Tokens | <s>, </s>, <pad>, <unk> |
| Training Shards | 8 |
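
For reference, a tokenizer with these settings could be trained along the following lines with the Hugging Face `tokenizers` library. This is a minimal sketch under assumptions, not the actual training script: the shard paths are placeholders, and the custom dan_Latn pre-tokenizer, NFC normalizer, ltr_3digit number handling, and contraction handling listed above are not reproduced here.

```python
from tokenizers import ByteLevelBPETokenizer

# Placeholder paths; the actual 8 Fineweb-2-HQ training shards are not bundled with the model.
files = [f"shard_{i}.txt" for i in range(8)]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train(
    files=files,
    vocab_size=16_000,  # target size; the card reports a final vocab of 16,953
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
)
tokenizer.save_model(".")  # writes vocab.json and merges.txt
```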
## Usage

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_script_Germ_16000")
tokens = tokenizer.encode("Hello, world!")
```
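
`encode` returns token IDs. To see the token strings themselves (as in the sample table below), the standard `transformers` helpers can be used; a small sketch:

```python
ids = tokenizer.encode("Hello, world!")
# Byte-level pieces, e.g. ['H', 'ello', ',', 'Ġw', 'orld', '!'];
# any configured special tokens may also appear at the start/end.
print(tokenizer.convert_ids_to_tokens(ids))
print(tokenizer.decode(ids))  # round-trips back to "Hello, world!"
```

Here `Ġ` is the byte-level rendering of a leading space.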
## Files

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules
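
These files can also be loaded without `transformers`, directly with the `tokenizers` library; a minimal sketch, assuming `tokenizer.json` has been downloaded locally:

```python
from tokenizers import Tokenizer

# tokenizer.json bundles the normalizer, pre-tokenizer, merges, and special tokens.
tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("Hello, world!")
print(enc.tokens)  # token strings
print(enc.ids)     # token IDs
```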
## Sample Encoding
| Text | Tokens | Token IDs |
|---|---|---|
| Hello, world! 12345 This is a test. こんにちは | `H`, `ello`, `,`, `Ġw`, `orld`, `!`, `Ġ`, `123`, `45`, `ĠTh`, `is`, `Ġis`, `Ġa`, `Ġtest`, `.`, `Ġ`, `ãģ`, `ĵ`, `ãĤ`, `ĵ` | 42, 13486, 14, 275, 5150, 3, 223, 16446, 3832, 1249, 289, 516, 270, 5190, 16, 223, 3768, 244, 5986, 244 |
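
Two details of this row are worth noting. The ltr_3digit setting presumably chunks digit runs left-to-right into groups of at most three, which is why 12345 surfaces as `123` + `45`. And because the tokenizer is byte-level, text outside the four training languages (here こんにちは) never maps to `<unk>`: it falls back to UTF-8 byte pieces rendered through the byte-to-unicode table, which is why it appears as tokens like `ãģ` and `ĵ`. A quick check, reusing the tokenizer from the Usage section:

```python
for text in ["12345", "こんにちは"]:
    ids = tokenizer.encode(text)
    # Byte-level fallback: no <unk>, even for scripts unseen in training.
    print(text, "->", tokenizer.convert_ids_to_tokens(ids))
```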