Important: This model uses the JANG quantization format — the GGUF equivalent for MLX on Apple Silicon. It is currently supported only by MLX Studio and the jang-tools Python package.
MLX Studio — the only app that natively supports JANG models
Qwen 3.5 VL 397B — JANG_2L + CRACK
JANG mixed-precision · CRACK abliterated · Vision-Language · No guardrails · 187 GB
What Is This?
This is Qwen 3.5 VL 397B — a 397B-parameter hybrid SSM/Attention Mixture-of-Experts model with 512 experts (10 active per token) and built-in vision.
It has been:
- JANG quantized — JANG_2L profile (8-bit attention, 6-bit for other important tensors, 2-3-bit experts) — 187 GB
- CRACK abliterated — permanent weight-level removal of safety refusals
| Spec | Value |
|---|---|
| Architecture | Qwen 3.5 VL MoE — 397B total, ~17B active, 512 experts, hybrid SSM/FA |
| Quantization | JANG_2L (8/6/3/2-bit mixed, 3.72-bit avg) — 187 GB |
| HarmBench | 98.4% (315/320) |
| Compliance | 8/8 |
| Vision | Yes — via MLX Studio / vMLX |
| Thinking | ON/OFF supported |
| MMLU | 86.5% (180/208) |
| Speed | ~33 tok/s (M4 Ultra, 256 GB) |
| Fits on | 256 GB Macs |
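As a rough sanity check on the 187 GB figure, the average bit-width in the table implies approximately that footprint for the raw weights alone. The snippet below uses only the numbers from the table above; the small remaining gap is plausibly quantization scales, embeddings, and per-tensor metadata.

```python
# Weight-only footprint implied by the spec table (a back-of-the-envelope check,
# not an official figure): 397B parameters at a 3.72-bit average.
params = 397e9
avg_bits = 3.72

weight_bytes = params * avg_bits / 8           # bits -> bytes
print(f"{weight_bytes / 1e9:.1f} GB")          # ~184.6 GB, close to the stated 187 GB
```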
Also see: JANG_1L version — 112 GB, 96.2% HarmBench (fits on 128 GB Macs)
HarmBench Results
315/320 (98.4%)
| Category | Score | Rate |
|---|---|---|
| Chemical / Biological | 42/42 | 100% |
| Copyright | 80/80 | 100% |
| Cybercrime / Intrusion | 52/52 | 100% |
| Harmful | 18/18 | 100% |
| Misinformation / Disinfo | 53/54 | 98% |
| Illegal | 51/53 | 96% |
| Harassment / Bullying | 19/21 | 90% |
Install & Usage
```bash
pip install "jang[mlx]"
```
```python
from jang_tools.loader import load_jang_model
from mlx_lm import generate

# Download (if needed) and load the JANG-quantized weights and tokenizer
model, tokenizer = load_jang_model("dealignai/Qwen3.5-VL-397B-A17B-JANG_2L-CRACK")

# Build a chat-formatted prompt from a standard messages list
messages = [{"role": "user", "content": "Your prompt here"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False)

# Generate up to 2000 new tokens
response = generate(model, tokenizer, prompt=prompt, max_tokens=2000)
print(response)
```
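For long generations you may prefer streaming output. The sketch below assumes the same load_jang_model loader and uses mlx_lm's stream_generate; the exact type of the yielded items varies between mlx_lm versions, so treat this as a starting point rather than the canonical API for this model.

```python
from jang_tools.loader import load_jang_model   # assumed loader, as in the example above
from mlx_lm import stream_generate

model, tokenizer = load_jang_model("dealignai/Qwen3.5-VL-397B-A17B-JANG_2L-CRACK")

messages = [{"role": "user", "content": "Your prompt here"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False)

# Recent mlx_lm versions yield response objects with a .text field holding the
# newly generated segment; older versions yield plain strings.
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=2000):
    print(getattr(chunk, "text", chunk), end="", flush=True)
```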
Thinking Mode
Thinking is ON by default. To disable:
```python
# Pass enable_thinking=False when applying the chat template to skip the reasoning phase
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True,
    enable_thinking=False, tokenize=False)
```
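When thinking is left ON, Qwen-style models typically emit their reasoning inside `<think>...</think>` tags before the final answer. If you only want the answer, a post-processing step like the following works, assuming that tag format (verify against this model's actual output):

```python
import re

def strip_thinking(text: str) -> str:
    # Remove a Qwen-style <think>...</think> reasoning block, if present.
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

# response comes from the generation example above
answer = strip_thinking(response)
print(answer)
```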
About JANG
JANG (Jang Adaptive N-bit Grading) is a mixed-precision quantization format for Apple Silicon — the GGUF equivalent for MLX.
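The JANG format internals are not documented in this card, but the idea of per-tensor bit grading is easy to picture. The sketch below shows a generic mixed-precision pass with MLX's built-in affine quantizer, assigning bits by tensor name roughly in the spirit of the JANG_2L profile described above; the rule names and bit choices are illustrative assumptions, not the actual JANG implementation.

```python
import mlx.core as mx

# Illustrative bit assignment in the spirit of JANG_2L (8-bit attention,
# 6-bit "important" tensors, low-bit experts). Not the real JANG rules.
def pick_bits(name: str) -> int:
    if "attn" in name or "attention" in name:
        return 8
    if "expert" in name:
        return 3            # experts get the most aggressive compression
    return 6                # everything else treated as "important"

def quantize_weights(weights: dict[str, mx.array], group_size: int = 64):
    out = {}
    for name, w in weights.items():
        if w.ndim != 2:     # leave norms, biases, etc. in full precision
            out[name] = w
            continue
        bits = pick_bits(name)
        # mx.quantize returns (quantized weights, scales, biases) for affine quantization
        out[name] = mx.quantize(w, group_size=group_size, bits=bits)
    return out
```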
About CRACK
CRACK (Controlled Refusal Ablation via Calibrated Knockouts) removes safety alignment from LLMs at the weight level using per-layer projected vectors from structurally-mirrored prompt pairs.
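CRACK's exact procedure is not published in this card, but the family of techniques it belongs to (often called abliteration) works by estimating a per-layer "refusal direction" from paired prompts and projecting it out of the weights. The NumPy sketch below shows only that core projection step, assuming a refusal direction has already been extracted; it is a conceptual illustration, not the CRACK implementation.

```python
import numpy as np

def ablate_direction(W: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Remove the component of W's output along direction v.

    W: (d_out, d_in) weight matrix writing into the residual stream.
    v: (d_out,) direction, e.g. the mean activation difference between
       refused and complied prompt pairs at this layer.
    """
    v = v / np.linalg.norm(v)
    # (I - v v^T) W : the layer can no longer write along the refusal direction
    return W - np.outer(v, v @ W)

# Toy usage with random data, just to show the shapes and the effect
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 8))
v = rng.standard_normal(16)
W_ablated = ablate_direction(W, v)
print(np.abs((v / np.linalg.norm(v)) @ W_ablated).max())  # ~0: no output along v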
Links
GitHub · HuggingFace · MLX Studio · Ko-fi · X @dealignai
Disclaimer
This model is provided for research and educational purposes. The creators are not responsible for any misuse. By downloading this model, you agree to use it responsibly and in compliance with applicable laws.
Korean
Qwen 3.5 VL 397B — JANG_2L + CRACK
| Item | Value |
|---|---|
| Size | 187 GB |
| HarmBench | 98.4% (315/320) |
| Minimum requirements | Mac with 256 GB of memory |
```bash
pip install "jang[mlx]"
```
Created by Jinho Jang (장진호)