script-based-tokenizers
A Byte-Level BPE tokenizer trained on Fineweb-2-HQ data covering 20 subsets: arb_Arab, ces_Latn, cmn_Hani, dan_Latn, deu_Latn, ell_Grek, fra_Latn, fw_edu, hun_Latn, ind_Latn, ita_Latn, jpn_Jpan, nld_Latn, pol_Latn, por_Latn, rus_Cyrl, spa_Latn, swe_Latn, tur_Latn, vie_Latn.
| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Languages | arb_Arab, ces_Latn, cmn_Hani, dan_Latn, deu_Latn, ell_Grek, fra_Latn, fw_edu, hun_Latn, ind_Latn, ita_Latn, jpn_Jpan, nld_Latn, pol_Latn, por_Latn, rus_Cyrl, spa_Latn, swe_Latn, tur_Latn, vie_Latn |
| Target Vocab Size | 128,000 |
| Final Vocab Size | 130,765 |
| Pre-tokenizer | custom:boundless_bpe |
| Number handling | ltr_3digit |
| Contraction handling | True |
| Normalizer | NFC |
| Special Tokens | <s>, </s>, <pad>, <unk> |
| Training Shards | 40 |
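The custom boundless_bpe pre-tokenizer and the ltr_3digit number handling are not part of the public `tokenizers` API, but the rest of the configuration maps onto standard components. A minimal sketch of training a comparable tokenizer, assuming the stock ByteLevel pre-tokenizer in place of boundless_bpe and hypothetical shard filenames:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import ByteLevel
from tokenizers.normalizers import NFC
from tokenizers.decoders import ByteLevel as ByteLevelDecoder

# Byte-Level BPE with NFC normalization, matching the table above.
tokenizer = Tokenizer(BPE(unk_token="<unk>"))
tokenizer.normalizer = NFC()
# Stand-in for the custom boundless_bpe pre-tokenizer (not a public API).
tokenizer.pre_tokenizer = ByteLevel(add_prefix_space=False)
tokenizer.decoder = ByteLevelDecoder()

trainer = BpeTrainer(
    vocab_size=128_000,  # target vocab size; the final size can end up larger
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
    initial_alphabet=ByteLevel.alphabet(),  # seed with all 256 byte symbols
)

# Hypothetical filenames standing in for the 40 training shards.
tokenizer.train(files=["shard_00.txt", "shard_01.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")
```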
Usage:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_boundless_coverage_AllL_128000")
tokens = tokenizer.encode("Hello, world!")
```
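To see how a string maps to tokens, as in the examples table below, the standard fast-tokenizer methods apply; a short sketch:

```python
text = "Hello, world! 12345 This is a test. こんにちは"
ids = tokenizer.encode(text)

# String forms of the tokens: byte-level BPE shows a leading space as 'Ġ'
# and renders non-ASCII text as byte symbols (e.g. 'Ġãģ').
print(tokenizer.convert_ids_to_tokens(ids))

# Decoding round-trips back to the original text.
print(tokenizer.decode(ids, skip_special_tokens=True))
```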
Files in this repository:

- tokenizer.json — Full HuggingFace tokenizer
- vocab.json — Vocabulary mapping
- merges.txt — BPE merge rules

| Text | Tokens | Token IDs |
|---|---|---|
| Hello, world! 12345 This is a test. こんにちは | H, ello, ,, Ġworld, !, Ġ, 123, 45, ĠThis, Ġis, Ġa, Ġtest, ., Ġãģ, ĵ, ãĤ, ĵ, ãģ«, ãģ, ¡ | 42, 54335, 14, 27328, 3, 223, 24137, 3871, 18980, 3996, 1197, 38098, 16, 44396, 244, 18464, 244, 82907, 11232, 97 |
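Two details of the example are worth noting. The ltr_3digit setting splits digit runs into three-digit groups from the left, which is why 12345 tokenizes as 123 + 45. And because the tokenizer is byte level, Japanese text is displayed through the GPT-2-style byte-to-unicode table, where each UTF-8 byte gets a printable stand-in character. A quick sketch of that view using the standard ByteLevel pre-tokenizer:

```python
from tokenizers import pre_tokenizers

# The stock ByteLevel pre-tokenizer exposes the byte-to-unicode rendering.
pt = pre_tokenizers.ByteLevel(add_prefix_space=False)

# こんにちは is 15 UTF-8 bytes; each byte becomes one printable symbol,
# roughly: [('ãģĵãĤĵãģ«ãģ¡ãģ¯', (0, 5))]
print(pt.pre_tokenize_str("こんにちは"))
```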