# convergent-llama-300M-muon-addition_3digit

A 300M-parameter language model trained from scratch on deqing/addition_dataset as part of the Convergent Evolution project, which investigates how Fourier features emerge in LLM number embeddings.
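As context for what the project measures, here is a minimal sketch of one way to probe number-token embeddings for periodic (Fourier) structure. It is not the project's actual analysis code; it assumes the Llama 3 tokenizer maps each integer 0–999 to a single token (which holds for most but should be verified), and it simply looks at the FFT of embeddings ordered by numeric value:

```python
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deqing/convergent-llama-300M-muon-addition_3digit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Collect input embeddings for integers that tokenize to a single token.
emb = model.get_input_embeddings().weight.detach()
rows = []
for n in range(1000):
    ids = tokenizer.encode(str(n), add_special_tokens=False)
    if len(ids) == 1:  # skip any number that splits into multiple tokens
        rows.append(emb[ids[0]])

E = torch.stack(rows).numpy()  # (num_numbers, hidden_dim)

# FFT along the number axis (assumes the kept numbers are contiguous):
# sharp peaks at particular frequency bins indicate periodic "Fourier
# features" in how the model embeds numbers.
spectrum = np.abs(np.fft.rfft(E - E.mean(axis=0), axis=0)).mean(axis=1)
top_bins = np.argsort(spectrum)[::-1][:5]
print("Dominant frequency bins:", top_bins)
```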

## Model details

| Detail | Value |
|---|---|
| Architecture | LLaMA-style Transformer (12 layers, 1024 hidden, 16 heads, GQA) |
| Parameters | ~300M |
| Optimizer | Muon (for 2D weights) + AdamW (for embeddings/bias/norm) |
| Data perturbation | 3-digit addition data (operands 0–999) |
| Training data | `deqing/addition_dataset` |
| Context length | 1024 |
| Tokenizer | Llama 3 (128K vocab) |
| Batch size | 512 sequences |
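For reference, a minimal sketch of a `LlamaConfig` matching the table above. The GQA key/value head count and exact vocabulary size are assumptions not stated on this card, so treat this as illustrative rather than the model's actual config:

```python
from transformers import LlamaConfig

# Hypothetical config matching the table above.
config = LlamaConfig(
    hidden_size=1024,
    num_hidden_layers=12,
    num_attention_heads=16,
    num_key_value_heads=4,       # assumption: KV head count for GQA is not stated
    max_position_embeddings=1024,
    vocab_size=128256,           # assumption: standard Llama 3 tokenizer vocabulary
)
```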

## Usage

```python
from transformers import AutoModelForCausalLM

# Load final checkpoint
model = AutoModelForCausalLM.from_pretrained(
    "deqing/convergent-llama-300M-muon-addition_3digit"
)
```
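To run a quick addition query, load the tokenizer and generate. The `"123+456="` prompt format below is an assumption; the exact formatting used in `deqing/addition_dataset` may differ:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "deqing/convergent-llama-300M-muon-addition_3digit"
)

# Hypothetical prompt format; adjust to match the training data.
inputs = tokenizer("123+456=", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```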

## Training dynamics

Intermediate checkpoints are saved as branches: `tokens-200M`, `tokens-400M`, ..., `tokens-5.0B`.

```python
# Load intermediate checkpoint (e.g., at 1B tokens)
model = AutoModelForCausalLM.from_pretrained(
    "deqing/convergent-llama-300M-muon-addition_3digit",
    revision="tokens-1B",
)
```
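To enumerate the available checkpoint branches programmatically, a short sketch using the standard `huggingface_hub` API:

```python
from huggingface_hub import list_repo_refs

refs = list_repo_refs("deqing/convergent-llama-300M-muon-addition_3digit")
# Branches named "tokens-..." are intermediate checkpoints; "main" is the final one.
checkpoints = sorted(b.name for b in refs.branches if b.name.startswith("tokens-"))
print(checkpoints)
```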

## Citation

Paper forthcoming.
