# Gemma 4 31B-IT (MLX, 4-bit)
Gemma 4 31B Instruct quantized to 4-bit for Apple Silicon using MLX.
- Base model: google/gemma-4-31B-it
- Quantization: 4-bit, group size 64, affine mode
- Memory: ~17.5 GB
- Speed: ~33 tok/s generation, ~100 tok/s prefill (M3 Ultra)
## Important: mlx-lm Bug Fix Required
mlx-lm 0.31.1 has a bug in `gemma4_text.py` that causes garbage output (repeating tokens). This affects **all** Gemma 4 MLX models.
### One-line fix
In your mlx-lm installation, edit `mlx_lm/models/gemma4_text.py`, find the `Attention.__init__` method, and change:
```python
# BEFORE (broken):
self.scale = self.head_dim ** -0.5

# AFTER (fixed):
self.scale = 1.0
```
Gemma 4 normalizes queries and keys (Q/K norm) rather than relying on the usual `head_dim ** -0.5` scaling factor, so the attention scale should be 1.0.
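To see why the scale differs, here is a minimal NumPy sketch (not the model's actual code) contrasting the two conventions: unnormalized q/k need the `1/sqrt(head_dim)` factor to keep logits in range, while RMS-normalized q/k are already bounded (by Cauchy–Schwarz, each logit's magnitude is at most `head_dim`), so a scale of 1.0 is used.

```python
import numpy as np

def rms_norm(x, eps=1e-6):
    # Normalize along the head dimension, as Q/K norm does per head.
    return x / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)

def attention_logits(q, k, scale):
    # Pre-softmax attention scores.
    return (q @ k.T) * scale

head_dim = 256
rng = np.random.default_rng(0)
q = rng.normal(size=(4, head_dim))
k = rng.normal(size=(4, head_dim))

# Standard attention: raw q/k, scaled by 1/sqrt(head_dim).
std = attention_logits(q, k, head_dim ** -0.5)

# Q/K-norm attention: normalized q/k, scale 1.0 (the fix above).
normed = attention_logits(rms_norm(q), rms_norm(k), 1.0)
```

Applying `head_dim ** -0.5` on top of Q/K norm shrinks the logits far below their intended range, which matches the degenerate repeating-token output described above.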
To find the file:

```shell
python3 -c "import mlx_lm; print(mlx_lm.__file__)"
# then edit models/gemma4_text.py in that directory
```
This fix is tracked in PR #1093 on ml-explore/mlx-lm.
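If you'd rather apply the fix as a script than edit by hand, the edit itself is just a string substitution. A hypothetical helper (not part of mlx-lm) that performs it and refuses to touch an already-patched or newer, fixed file:

```python
BROKEN = "self.scale = self.head_dim ** -0.5"
FIXED = "self.scale = 1.0"

def patch_attention_scale(source: str) -> str:
    """Apply the one-line scale fix to gemma4_text.py source text.

    Raises ValueError if the broken line is absent, so an
    already-patched install is left alone.
    """
    if BROKEN not in source:
        raise ValueError("broken scale line not found; file may already be patched")
    return source.replace(BROKEN, FIXED)

# Usage sketch (locate the directory via mlx_lm.__file__ as shown above):
#   path = <mlx_lm dir>/models/gemma4_text.py
#   rewrite the file with patch_attention_scale(its contents)
```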
## Usage
```python
from mlx_lm import load, generate

model, tokenizer = load("Phipper/gemma-4-31b-it-mlx-4bit")

messages = [{"role": "user", "content": "What is the capital of France?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)
```
## Benchmarks (M3 Ultra, 96 GB)
| Metric | Value |
|---|---|
| Prefill speed | ~100 tok/s |
| Generation speed | ~33 tok/s |
| Peak memory | 17.5 GB |
| Quantization | 4-bit (group 64) |
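The peak-memory figure is consistent with a back-of-envelope estimate. Assuming one fp16 scale and one fp16 bias per 64-weight group (an assumption about MLX's affine quantization layout, not a documented guarantee), 31B parameters at 4 bits come out just under the measured 17.5 GB, with the remainder plausibly activations and KV cache:

```python
params = 31e9
bits_per_weight = 4
group_size = 64

# Assumed overhead: one fp16 scale + one fp16 bias per 64-weight group.
overhead_bits = 2 * 16 / group_size  # 0.5 extra bits per weight

total_gb = params * (bits_per_weight + overhead_bits) / 8 / 1e9
# total_gb ≈ 17.4 GB for the quantized weights alone
```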
## Model Details
Gemma 4 31B features:
- 60 transformer layers with sliding + full attention pattern
- 32 attention heads, 16 KV heads (sliding), 4 global KV heads
- Head dim 256 (sliding), 512 (global)
- K-eq-V attention for global layers
- 262K vocabulary, up to 256K token context
- Built-in reasoning mode (thinking channel)
## Credits
- Created by Nate Baranski (@Phipper)
- Model by Google DeepMind
- MLX architecture support by Prince Canuma
- Bug fix, quantization & evaluation: Nate Baranski + Bessemer (Claude Code agent)