# PersonaNexus Voice Pack Adapters
Weight-level personality modules for language models. Each adapter is a LoRA fine-tune that makes SmolLM2-360M produce text in a specific author's distinctive voice.
Unlike system prompts, voice packs modify the model's weights, producing deeper style transfer that reduces personality drift by up to 49% compared to prompt-only approaches.
## Available Voice Packs
### Theology & Philosophy
| Pack | Author | Style | Corpus |
|---|---|---|---|
| aquinas | St. Thomas Aquinas | Systematic, scholastic, Q&A articles | Summa Theologica (547K words) |
| augustine | St. Augustine | Introspective, rhetorical, narrative | Confessions, City of God (554K words) |
| chesterton | G.K. Chesterton | Witty, paradoxical, accessible | Orthodoxy, Heretics (332K words) |
### Literary Fiction
| Pack | Author | Style | Corpus |
|---|---|---|---|
| hemingway | Ernest Hemingway | Sparse, declarative, dialogue-heavy | A Farewell to Arms, The Sun Also Rises (93K words) |
| austen | Jane Austen | Regency social, character-driven | Pride and Prejudice, Emma (404K words) |
| tolkien-adjacent | Lord Dunsany / William Morris | Archaic fantasy, epic prose | King of Elfland's Daughter, Well at World's End (262K words) |
### Historical
| Pack | Author | Style | Corpus |
|---|---|---|---|
| lincoln | Abraham Lincoln | Eloquent, principled, plainspoken | Collected Writings (885K words) |
| shakespeare | William Shakespeare | Poetic, dramatic, iambic | Complete Works (935K words) |
| dickens | Charles Dickens | Vivid, satirical, ornate | 7 novels (1.5M words) |
## Quick Start

```bash
pip install mlx-lm

# Generate with a voice pack
mlx_lm.generate --model HuggingFaceTB/SmolLM2-360M \
  --adapter-path jcrowan3/voice-pack-adapters/aquinas/360m \
  --max-tokens 200 --temp 0.7 \
  --prompt "Whether the soul is immortal"
```
## Key Research Findings

Based on 900+ generations (5 runs per condition), with statistically significant results:
- LoRA beats prompt-only in 6/8 comparisons across two model sizes
- 49% less personality drift over 1000-token generation vs base model
- Adapter blending creates hybrid personalities better than either source
- Cross-domain validated across theology and literary fiction
- Minimum 100K words of training data for usable voice packs
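The adapter-blending finding above can be sketched as a weighted average of matching LoRA weight tensors. This is a minimal illustration with toy matrices, not the exact blending method used in the experiments (which the card does not specify); the function and parameter names are illustrative:

```python
import numpy as np

def blend_adapters(adapter_a, adapter_b, alpha=0.5):
    """Linearly interpolate two LoRA adapters with identical parameter names/shapes."""
    if adapter_a.keys() != adapter_b.keys():
        raise ValueError("adapters must have identical parameter names")
    return {
        name: alpha * adapter_a[name] + (1.0 - alpha) * adapter_b[name]
        for name in adapter_a
    }

# Toy stand-ins for two voice packs' low-rank matrices.
aquinas = {"layers.0.lora_a": np.ones((4, 2)), "layers.0.lora_b": np.zeros((2, 4))}
chesterton = {"layers.0.lora_a": np.zeros((4, 2)), "layers.0.lora_b": np.ones((2, 4))}

# At alpha=0.5 every blended tensor is the elementwise mean of the two sources.
hybrid = blend_adapters(aquinas, chesterton, alpha=0.5)
```

Because LoRA updates are small low-rank deltas on a shared base model, interpolating them tends to yield a coherent intermediate style rather than noise.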
### LoRA vs Prompt-Only (360M)

Repetition scores (lower is better):

| Voice | LoRA | Prompt-Only | Improvement |
|---|---|---|---|
| Newman | 0.124 | 0.244 | 49% better |
| Augustine | 0.192 | 0.285 | 33% better |
| Chesterton | 0.238 | 0.285 | 16% better |
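The card does not define the repetition score in the table above. One common formulation, shown here purely as a hedged sketch of what such a metric could look like, is the fraction of n-grams in a generation that repeat an earlier n-gram:

```python
def repetition_score(text, n=3):
    """Fraction of n-grams that duplicate an earlier n-gram; 0.0 means no repetition."""
    tokens = text.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    # 1 - (unique n-grams / total n-grams): grows as the text loops on itself.
    return 1.0 - len(set(ngrams)) / len(ngrams)

looping = repetition_score("the cat sat on the mat and the cat sat on the mat")
varied = repetition_score("a b c d")  # no repeated trigrams -> 0.0
```

Under a metric like this, the 0.124 vs 0.244 gap for Newman means the LoRA adapter repeats itself roughly half as often as the prompted baseline.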
## Training Details
- Base model: SmolLM2-360M (MIT license)
- Method: LoRA, 12 adapter layers, 1000 iterations
- Hardware: Apple Silicon M4, 64GB unified memory
- Framework: mlx-lm
- Training time: 15-20 minutes per voice pack
- Corpus: All public domain (Project Gutenberg, NewAdvent.org, Vatican.va)
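The setup above maps onto mlx-lm's LoRA training CLI. The following is a hypothetical reproduction of one voice pack's training run; the data and adapter paths are assumptions, not from the source:

```shell
# Sketch of the training recipe (12 adapter layers, 1000 iterations);
# ./corpora/aquinas and ./adapters/aquinas are placeholder paths.
mlx_lm.lora \
  --model HuggingFaceTB/SmolLM2-360M \
  --train \
  --data ./corpora/aquinas \
  --num-layers 12 \
  --iters 1000 \
  --adapter-path ./adapters/aquinas
```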
## Links
- Code & Research: github.com/PersonaNexus/voice-packs
- PersonaNexus Framework: github.com/PersonaNexus/personanexus
- Full Research Summary: RESEARCH_SUMMARY.md
## License

MIT: adapter weights are derivatives of the MIT-licensed base model, trained on public-domain texts.