# Convergent Evolution (Addition)
A 300M-parameter language model trained from scratch on `deqing/addition_dataset` as part of the Convergent Evolution project, which investigates how Fourier features emerge in LLM number embeddings.
| | |
|---|---|
| Architecture | LLaMA-style Transformer (12 layers, 1024 hidden size, 16 heads, GQA) |
| Parameters | ~300M |
| Optimizer | Muon (2D weight matrices) + AdamW (embeddings, biases, norms) |
| Task | 3-digit addition (operands 0–999) |
| Training data | `deqing/addition_dataset` |
| Context length | 1024 tokens |
| Tokenizer | Llama 3 (128K vocab) |
| Batch size | 512 sequences |
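
The Muon/AdamW split follows the common convention of applying Muon only to 2D weight matrices. The sketch below shows one way the parameter groups might be partitioned; the `Muon` import, the name-based filtering, and the treatment of `lm_head` are assumptions, not the project's released training code.

```python
from torch import nn

# Assumption: a Muon implementation such as Keller Jordan's
# (https://github.com/KellerJordan/Muon) is installed; Muon is not
# part of torch.optim.
# from muon import Muon

def split_param_groups(model: nn.Module):
    """Partition parameters as the card describes: Muon for 2D weight
    matrices, AdamW for embeddings, biases, and norm parameters."""
    muon_params, adamw_params = [], []
    for name, p in model.named_parameters():
        if not p.requires_grad:
            continue
        # Embedding tables are 2D but are routed to AdamW per the card;
        # filtering them (and the tied output head) by name is an assumption.
        if p.ndim == 2 and "embed" not in name and "lm_head" not in name:
            muon_params.append(p)
        else:
            adamw_params.append(p)
    return muon_params, adamw_params

# optimizers = [Muon(muon_params, lr=...), torch.optim.AdamW(adamw_params, lr=...)]
```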
```python
from transformers import AutoModelForCausalLM

# Load final checkpoint
model = AutoModelForCausalLM.from_pretrained("deqing/convergent-llama-300M-muon-addition_3digit")
```
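
For a quick smoke test, the checkpoint can be prompted with an addition problem. The prompt format below (`123+456=`) is a guess; check `deqing/addition_dataset` for the exact formatting the model was trained on.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deqing/convergent-llama-300M-muon-addition_3digit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Prompt format is an assumption; the dataset may format problems differently.
inputs = tokenizer("123+456=", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```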
Intermediate checkpoints are saved as branches: `tokens-200M`, `tokens-400M`, ..., `tokens-5.0B`.
```python
from transformers import AutoModelForCausalLM

# Load intermediate checkpoint (e.g., at 1B tokens)
model = AutoModelForCausalLM.from_pretrained(
    "deqing/convergent-llama-300M-muon-addition_3digit",
    revision="tokens-1B",
)
```
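
Since the project studies how Fourier features emerge in number embeddings, the intermediate branches lend themselves to spectral analysis across training. Below is a rough sketch of that kind of analysis, not the project's method: it assumes each number 0–999 that encodes to a single Llama 3 token can be treated as evenly spaced (approximate if some numbers are dropped), and the branch names are taken from the pattern listed above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deqing/convergent-llama-300M-muon-addition_3digit"
tokenizer = AutoTokenizer.from_pretrained(repo)

for revision in ["tokens-200M", "tokens-400M", "tokens-5.0B"]:
    model = AutoModelForCausalLM.from_pretrained(repo, revision=revision)
    emb = model.get_input_embeddings().weight.detach()  # (vocab_size, hidden)

    # Keep numbers 0-999 that encode to exactly one token, in numeric order.
    ids = []
    for n in range(1000):
        toks = tokenizer.encode(str(n), add_special_tokens=False)
        if len(toks) == 1:
            ids.append(toks[0])
    number_emb = emb[torch.tensor(ids)]  # (n_numbers, hidden)

    # Magnitude spectrum over the number axis, averaged across hidden dims;
    # periodic (Fourier) structure in the embeddings shows up as peaks.
    # Gaps in the kept numbers make this only approximate.
    spectrum = torch.fft.rfft(number_emb, dim=0).abs().mean(dim=1)
    print(revision, spectrum.topk(5).indices.tolist())
```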
Paper forthcoming.