Text Generation · MLX · Safetensors · Rust · qwen2 · 7b · agentic-coding · android · apple-silicon · attested · bash · c · chain-of-custody · chinese · code · code-completion · code-generation · code-infill · compacted · compensation-lora · consumer-gpu · cpp · cryptographically-verified · css · distillation · edge-inference · efficient · embedded · english · forge-alloy · function-calling · general · general-purpose · go · head-pruning · html · iphone · java · javascript · knowledge-distillation · kotlin · llama-cpp · lm-studio · local-inference · lora · macbook · mobile · multilingual · ollama · on-device · optimized · php · pruned · python · qwen · qwen-coder · qwen2.5 · qwen2.5-coder · raspberry-pi · reproducible · ruby · sql · swift · teacher-student · typescript · validation-artifact · versatile · conversational
card: rich prose (about + journey + ablation + stage notes)
README.md CHANGED
@@ -64,6 +64,33 @@ license: apache-2.0

---

## Benchmarks

| Benchmark | Score | Base | Δ | Verified |
@@ -110,9 +137,14 @@ print(tokenizer.decode(output[0], skip_special_tokens=True))

```
prune → lora → lora → eval (1 cycles)
```

- **Pruning**: 12% heads via activation-magnitude
- *
- **
- **Hardware**: NVIDIA GeForce RTX 5090
- **Forge tool**: [Continuum](https://github.com/CambrianTech/continuum) Factory + [sentinel-ai](https://github.com/CambrianTech/sentinel-ai)

---

## About this model

Methodology validation artifact for the v2 forge pipeline plus its KL-distillation compensation LoRA. It demonstrates that aggressive head pruning (activation-metric importance, pad-mode defrag), paired with output-distribution distillation against the unmodified teacher, recovers near-base HumanEval capability (61.0 vs 62.2 base, within calibration tolerance). This is the empirical anchor for PLASTICITY-COMPACTION §4.1.3.3 and for the loss-function ablation that closes the §4.1.3.2 PPL/HumanEval disconnect. It is NOT a Pareto improvement over the unmodified base 7B at any single VRAM tier; it is published as proof that the methodology stack works end-to-end, in preparation for the Qwen3.5-35B-A3B and 397B-A17B forges, where the pruning dimension actually wins.
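The importance metric named above can be written down in a few lines. The following is an illustrative NumPy sketch of layer-normalized activation-magnitude head scoring, not sentinel-ai's actual implementation; `head_importance` and `heads_to_prune` are hypothetical names.

```python
import numpy as np

def head_importance(per_layer_activations):
    """Layer-normalized activation-magnitude head importance (illustrative).

    per_layer_activations: list over layers of arrays shaped
    (num_heads, num_tokens, head_dim), collected on a calibration set.
    Normalizing within each layer keeps layers with large activation
    scales from dominating a single global pruning ranking.
    """
    scores = []
    for acts in per_layer_activations:
        mag = np.abs(acts).mean(axis=(1, 2))     # mean |activation| per head
        scores.append(mag / (mag.sum() + 1e-9))  # normalize within the layer
    return scores

def heads_to_prune(scores, fraction=0.12):
    # Flatten (layer, head) pairs and mark the lowest-importance fraction.
    flat = [(s, l, h) for l, layer in enumerate(scores) for h, s in enumerate(layer)]
    flat.sort()
    k = int(len(flat) * fraction)
    return sorted((l, h) for _, l, h in flat[:k])
```

The per-layer normalization is the point: a global, un-normalized ranking (as in the broken run 1 configuration) concentrates pruning in whichever layers happen to have the largest raw scales.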

## The Journey

This artifact is the punchline of a four-run experimental sequence on the same base model. The first run scored **50.0**; the final run scored **61.0**. Each intermediate run isolated a single variable, and each result narrowed the design space toward the structural fix that recovered near-base capability.

| Run | Configuration | HumanEval pass@1 |
|---|---|---|
| 1 | broken global-flat L2-weight | **50.0** |
| 2 | layer-normalized activation, 1-cycle 500-step | **54.9** |
| 3 | layer-normalized activation, 3-cycle (ablation) | **46.3** |
| 4 | 1-cycle + KL compensation LoRA | **61.0** |

## Loss Function Ablation

The compensation LoRA was run twice with identical configuration, varying only the distillation loss. The result is a substantive methodology finding in its own right:

| Distillation loss | HumanEval | HumanEval+ | Outcome |
|---|---|---|---|
| `mse_hidden` | **0.0** | **0.0** | degenerate fixed point: model collapsed to outputting '0' |
| `kl_logits` | **61.0** | **53.0** | near-base recovery within calibration tolerance |

MSE-on-hidden-states has a degenerate fixed point: the student can satisfy the loss while collapsing downstream computation, regardless of whether the hidden states still encode useful information. KL-on-output-logits has no such fixed point, because matching the teacher's output distribution directly constrains task-level behavior. **For autoregressive language models, distillation must operate at the output layer, not at intermediate residual streams.**
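The two objectives in the ablation can be stated directly. This is a minimal NumPy sketch of the two loss shapes, not the forge pipeline's code; the function names mirror the `mse_hidden` / `kl_logits` config values but are illustrative.

```python
import numpy as np

def _softmax(x):
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_logits_loss(student_logits, teacher_logits, temperature=1.0):
    # KL(teacher || student) on next-token distributions, averaged over
    # positions. It is zero only when the student reproduces the teacher's
    # output distribution, so task-level behavior is constrained directly.
    p = _softmax(teacher_logits / temperature)
    q = _softmax(student_logits / temperature)
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    return float(kl.mean()) * temperature ** 2

def mse_hidden_loss(student_hidden, teacher_hidden):
    # MSE on intermediate hidden states: the degenerate variant. A student
    # can drive this small while downstream computation collapses, which is
    # consistent with the observed 0.0 outcome.
    return float(np.mean((student_hidden - teacher_hidden) ** 2))
```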

## Benchmarks

| Benchmark | Score | Base | Δ | Verified |

```
prune → lora → lora → eval (1 cycles)
```

- **Pruning**: 12% heads via `activation-magnitude`, layer-normalized, pad-mode defrag
  > Layer-normalized activation-magnitude head importance (PLASTICITY-COMPACTION §4.1.3.1 fix). Pad-mode defrag preserves the q_proj invariant `num_q_heads * head_dim == hidden_size` so the artifact loads in llama.cpp (Finding 6 fix from VALIDATED-TENSOR-SURGERY).
- **lora**: rank ?, 500 steps
  > Single-cycle code-domain LoRA fine-tuning on the pruned student. The 1-cycle configuration was chosen because the 3-cycle multi-cycle test surfaced the §4.1.3.2 PPL/HumanEval disconnect (54.9 → 46.3 across cycles).
- **compensation-lora**: rank 16, 500 steps, `kl_logits` distillation against `Qwen/Qwen2.5-Coder-7B`
  > PLASTICITY-COMPACTION §4.1.3.3. KL divergence on output logits is the structural fix for the §4.1.3.2 disconnect. Loss-function ablation: MSE-on-hidden-states collapsed the model to 0.0 (degenerate fixed point); KL-on-logits recovered to 61.0. The LoRA adapter is merged into the student weights at save time, so inference-time VRAM and tokens/sec are unchanged from the uncompensated student.
- **Calibrated evaluation**: anchored against `Qwen2.5-Coder-7B` (published 61.6, measured 62.2, ±3.0 pt tolerance)
  > All HumanEval numbers are anchor-calibrated against the unmodified Qwen2.5-Coder-7B base, measured on the same hardware and pipeline in the same run. Hard-fail tolerance: ±3.0 points. Anchor delta: +0.6/+0.7 vs the Qwen-published 61.6/53.0, deterministic across 6+ independent runs.
- **Hardware**: NVIDIA GeForce RTX 5090
- **Forge tool**: [Continuum](https://github.com/CambrianTech/continuum) Factory + [sentinel-ai](https://github.com/CambrianTech/sentinel-ai)
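Two of the mechanical checks described in the stage notes can be sketched in a few lines. Function names are illustrative (not the forge tool's API); the head shapes assume the published Qwen2.5-7B config (28 query heads × 128 head_dim = 3584 hidden).

```python
def qproj_invariant_holds(num_q_heads: int, head_dim: int, hidden_size: int) -> bool:
    # llama.cpp expects q_proj to tile the hidden size exactly; pad-mode
    # defrag preserves this invariant after pruning.
    return num_q_heads * head_dim == hidden_size

def anchor_ok(measured: float, published: float, tolerance: float = 3.0) -> bool:
    # Hard-fail calibration gate: the unmodified base, re-measured on the
    # same hardware/pipeline, must land within +/- tolerance points of its
    # published score for the run's numbers to count.
    return abs(measured - published) <= tolerance

print(qproj_invariant_holds(28, 128, 3584))  # True: base shapes
print(qproj_invariant_holds(24, 128, 3584))  # False: naive 12% head drop
print(anchor_ok(62.2, 61.6))                 # True: +0.6 anchor delta
```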