Uncensored & Abliterated LLMs
Collection
Models with reduced safety guardrails, intended for research purposes. Created using Heretic abliteration. Use responsibly. • 9 items
An abliterated (uncensored) version of deepseek-ai/DeepSeek-R1-Distill-Qwen-32B — a 32B reasoning model with chain-of-thought capabilities, minus the safety refusals.
This combines DeepSeek-R1's strong reasoning with unrestricted output, making it useful for research requiring step-by-step analysis without artificial limitations.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "richardyoung/Deepseek-R1-Distill-Qwen-32b-uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Walk me through how RSA encryption works, step by step."}]
# add_generation_prompt=True appends the assistant turn marker so the model
# starts generating a reply rather than continuing the user message
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
DeepSeek-R1 is one of the strongest open-source reasoning models. The distilled 32B version retains impressive chain-of-thought capabilities at a manageable size. Abliteration allows researchers to study the full range of the model's reasoning abilities without refusal interventions.
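DeepSeek-R1-style models typically emit their chain of thought inside `<think>...</think>` tags before the final answer. A minimal sketch for separating the two, assuming this distilled model follows that convention (the `split_reasoning` helper is illustrative, not part of the model's API):

```python
import re

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a DeepSeek-R1-style completion into (chain_of_thought, answer).

    Assumes the model wraps its reasoning in <think>...</think> tags;
    if no tags are found, the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

# Example on a synthetic completion (not real model output):
completion = "<think>RSA relies on the hardness of factoring.</think>Step 1: pick two primes..."
cot, answer = split_reasoning(completion)
```

This makes it easy to log or study the reasoning trace separately from the user-facing answer, which is often the point of alignment and reasoning research on these models.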
Intended uses: research on reasoning, alignment studies, education, and creative applications requiring step-by-step analysis.