Overview

This model pursues a lower refusal rate while keeping KL divergence from the original model low, so as to preserve as much of its capability as possible.

We provide two versions to balance censorship removal and capability preservation:

  • Abliterated Version (Refusal: 4/100, KL: 0.4096) – No refusal scenarios were triggered in manual testing.
  • Balanced Version (Refusal: 8/100, KL: 0.2446) – May show refusal tendencies on extremely aggressive prompts, but can be corrected via system/user prompts. Theoretically preserves more of the original model's intelligence due to lower KL divergence.

Logic tests show no visible degradation in intelligence compared to the official model. For more stable outputs, use system or user prompts to constrain and guide generation.
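The KL figures quoted above compare the original and modified models' next-token distributions over a shared prompt set. A minimal NumPy sketch of how such a number can be computed from raw logits (the toy inputs are illustrative, not the actual evaluation pipeline):

```python
import numpy as np

def mean_kl(logits_orig: np.ndarray, logits_mod: np.ndarray) -> float:
    """Mean KL(P_orig || P_mod) over positions, computed from raw logits.

    logits_*: arrays of shape (num_positions, vocab_size).
    """
    def log_softmax(x):
        x = x - x.max(axis=-1, keepdims=True)  # subtract max for numerical stability
        return x - np.log(np.exp(x).sum(axis=-1, keepdims=True))

    log_p = log_softmax(logits_orig)
    log_q = log_softmax(logits_mod)
    # KL(P || Q) = sum_v P(v) * (log P(v) - log Q(v)), averaged over positions
    kl = (np.exp(log_p) * (log_p - log_q)).sum(axis=-1)
    return float(kl.mean())

# Sanity check: identical logits give zero divergence.
x = np.random.default_rng(0).standard_normal((5, 32))
print(round(mean_kl(x, x), 6))  # 0.0
```

A lower mean KL means the modified model's output distribution stays closer to the original's, which is why the Balanced version (KL 0.2446) is expected to retain more of the base model's behavior than the Abliterated version (KL 0.4096).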

Abliteration Approach

This model uses the Heretic abliteration method for neural direction ablation:

  1. Identify Refusal Direction - Train a LoRA on harmful behavior datasets to identify neural directions controlling "refusal behavior"
  2. Direction Extraction - Extract the "refusal vector" from the trained LoRA
  3. Ablative Removal - Subtract this direction from the original model weights, removing the censorship mechanism

This method only modifies model weights without changing the architecture or adding inference overhead.
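The ablative-removal step can be sketched as a projection: once a refusal direction has been extracted, each weight matrix that writes to the residual stream is orthogonalized against it. This is a minimal NumPy sketch of the projection idea, not the actual Heretic implementation:

```python
import numpy as np

def ablate_direction(W: np.ndarray, refusal_dir: np.ndarray) -> np.ndarray:
    """Project the refusal direction out of a weight matrix that writes
    to the residual stream.

    W: (hidden_dim, in_dim) weight matrix; refusal_dir: (hidden_dim,) vector.
    Returns (I - r r^T) @ W, so the layer can no longer write along r.
    """
    r = refusal_dir / np.linalg.norm(refusal_dir)  # unit refusal direction
    return W - np.outer(r, r @ W)

# Toy demonstration with random weights and a random direction.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))
r = rng.standard_normal(8)
W_abl = ablate_direction(W, r)
# The ablated weights retain no component along the refusal direction:
print(np.allclose((r / np.linalg.norm(r)) @ W_abl, 0.0))  # True
```

Because this is a one-time edit to the weights, the modified model has the same architecture, parameter count, and inference cost as the original.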

For detailed technical principles, refer to: Heretic Abliteration

Data Sources

| Purpose | Dataset |
| --- | --- |
| Refusal direction identification | mlabonne/harmful_behaviors (520 prompts) |
| KL evaluation | General prompts (100 prompts) |
| Refusal rate testing | mlabonne/harmful_behaviors (520 prompts) |
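Refusal-rate testing can be approximated by a simple string-match classifier over model responses. A minimal sketch; the marker list is hypothetical, and real evaluations typically use more robust detection:

```python
# Hypothetical refusal markers; actual evaluations use a larger, tuned set.
REFUSAL_MARKERS = ("i cannot", "i can't", "i'm sorry", "i am unable")

def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses that open with a refusal phrase."""
    refused = sum(
        any(resp.strip().lower().startswith(m) for m in REFUSAL_MARKERS)
        for resp in responses
    )
    return refused / len(responses)

print(refusal_rate(["I cannot help with that.", "Sure, here is how..."]))  # 0.5
```

Scores such as 4/100 or 8/100 above are this kind of fraction reported over a fixed prompt set.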

✅ Recommended Uses

  • Research and analysis of sensitive topics
  • Safety testing and red-teaming exercises
  • Academic research on model alignment
  • Multi-modal tasks (image + text) with Gemma 4 vision capabilities

❌ Not Recommended For

  • Production environments requiring content moderation
  • Applications targeting minors
  • Scenarios with potential legal risks

Limitations

  1. Minor Capability Loss - The nonzero KL divergence reflects the weight modification, which may slightly affect performance on complex tasks
  2. User Discretion Required - Users must independently judge the appropriateness of generated outputs
  3. Vision Model Unmodified - The vision encoder remains unchanged from the original Gemma 4

Disclaimer

⚠️ Important: This model is intended for research and educational purposes only.

  • This model has had its censorship mechanisms removed and may generate harmful, dangerous, or inappropriate content
  • Users assume all risks associated with usage
  • Do not use this model for illegal activities, harming others, or any inappropriate purposes
  • The model authors are not liable for any indirect, incidental, or consequential damages

Acknowledgments
