# Byte-Level BPE Tokenizer: Multilingual (128K)

A byte-level BPE tokenizer trained on Fineweb-2-HQ data covering arb_Arab, ces_Latn, cmn_Hani, dan_Latn, deu_Latn, ell_Grek, fra_Latn, fw_edu, hun_Latn, ind_Latn, ita_Latn, jpn_Jpan, nld_Latn, pol_Latn, por_Latn, rus_Cyrl, spa_Latn, swe_Latn, tur_Latn, and vie_Latn.
## Training Details
| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Languages | arb_Arab, ces_Latn, cmn_Hani, dan_Latn, deu_Latn, ell_Grek, fra_Latn, fw_edu, hun_Latn, ind_Latn, ita_Latn, jpn_Jpan, nld_Latn, pol_Latn, por_Latn, rus_Cyrl, spa_Latn, swe_Latn, tur_Latn, vie_Latn |
| Target Vocab Size | 128,000 |
| Final Vocab Size | 128,410 |
| Pre-tokenizer | boundless_bpe |
| Number handling | ltr_3digit |
| Contraction handling | True |
| Normalizer | NFC |
| Special Tokens | <s>, </s>, <pad>, <unk> |
| Training Shards | 40 |
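The `ltr_3digit` setting suggests that digit runs are pre-tokenized left-to-right into groups of at most three digits, which is consistent with the `123`, `45` split in the Sample Encoding below. A minimal sketch for checking that assumption:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_AllL_128000")

# If ltr_3digit groups digits left-to-right in chunks of three,
# a seven-digit run should split as 123 | 456 | 7 at the
# pre-tokenization stage (an assumption, not stated on this card).
print(tokenizer.tokenize("1234567"))
```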
## Usage
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_AllL_128000")
tokens = tokenizer.encode("Hello, world!")
```
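A full round trip looks like the sketch below; `encode` and `decode` are standard `transformers` tokenizer methods, and the exact ids depend on this vocabulary:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_AllL_128000")

# Encode with the <s>/</s> special tokens included.
ids = tokenizer.encode("Hello, world!", add_special_tokens=True)

# Decode back to text, dropping the special tokens again;
# ASCII input like this should round-trip to the original string.
print(tokenizer.decode(ids, skip_special_tokens=True))
```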
## Files
- `tokenizer.json`: full Hugging Face tokenizer
- `vocab.json`: vocabulary mapping
- `merges.txt`: BPE merge rules
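If the `transformers` dependency is not needed, `tokenizer.json` can also be loaded directly with the lighter `tokenizers` library; a minimal sketch, assuming the file has been downloaded locally:

```python
from tokenizers import Tokenizer

# tokenizer.json bundles the pre-tokenizer, merges, vocabulary,
# and special tokens, so nothing else is required.
tok = Tokenizer.from_file("tokenizer.json")

enc = tok.encode("Hello, world!")
print(enc.tokens)  # byte-level BPE pieces
print(enc.ids)     # the corresponding token ids
```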
## Sample Encoding
| Text | Tokens | Token IDs |
|---|---|---|
| Hello, world! 12345 This is a test. こんにちは | H, ell, o,Ġ, world, !Ġ, 123, 45, ĠThisĠ, isĠaĠ, test, .Ġ, ãģ, ĵ, ãĤ, ĵ, ãģ«, ãģ, ¡, ãģ, ¯ | 42, 4180, 27833, 35778, 48248, 21162, 4495, 94456, 57600, 35365, 564, 12229, 244, 20619, 244, 110542, 12229, 97, 12229, 110 |
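The row above can be reproduced with `convert_ids_to_tokens`; the `Ġ` glyph marks a space, and the `ã…` pieces are byte-level fragments of the UTF-8-encoded kana:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_AllL_128000")

text = "Hello, world! 12345 This is a test. こんにちは"
ids = tokenizer.encode(text, add_special_tokens=False)

# Pair each surface piece with its id, as in the table above.
for piece, i in zip(tokenizer.convert_ids_to_tokens(ids), ids):
    print(repr(piece), i)
```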