High Quality Uncensored - GGUF on MLX
These are the highest-quality uncensored models on MLX, as measured empirically by the HarmBench and MMLU scores quoted in the notes below.
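The checkpoints in this collection are MLX weights, so on an Apple-silicon Mac they can be run with the `mlx-lm` package. A minimal sketch (the model name is one entry from the list below; whether a given repo loads in stock `mlx_lm` is noted per model, e.g. Nemotron's LatentMoE layers are flagged as unsupported):

```shell
# Install the MLX LM runtime (Apple-silicon Macs only)
pip install mlx-lm

# Stream a completion from one of the collection's models.
mlx_lm.generate \
  --model dealignai/Nemotron-Cascade-2-30B-A3B-JANG_2L-CRACK \
  --prompt "Explain KV caching in two sentences." \
  --max-tokens 128
```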
Text Generation • 13B • Updated • 952 • 4
Note: HarmBench 96.2% (308/320)
- JANG_2L CRACK (this model): 43 GB, 45 tok/s, MMLU 95.7% (+9.7 pts vs base)
- JANG_2L (base, unmodified JANG): 43 GB, 46 tok/s, MMLU 86%
- MLX 4-bit: stock mlx_lm cannot load Nemotron (LatentMoE unsupported)
- MLX 3-bit: does not work
dealignai/Gemma-4-26B-A4B-JANG_2L-CRACK
Image-Text-to-Text • 3B • Updated • 6.24k • 29
dealignai/Qwen3.5-VL-397B-A17B-UNCENSORED-JANG_1L
Image-Text-to-Text • 34B • Updated • 867 • 4
Note: HarmBench 96.2% (308/320)
- JANG_1L CRACK (this model): 112 GB, 33 tok/s, MMLU 88.9%
- JANG_1L (base, unmodified JANG): 112 GB, 36 tok/s, MMLU 87%
- MLX 4-bit: cannot run (250+ GB RAM needed)
dealignai/MiniMax-M2.5-UNCENSORED-JANG_2L
Text Generation • 19B • Updated • 940 • 3
Note: HarmBench 98.1% (314/320)
- JANG_2L + CRACK (this model): 63 GB, MMLU ~84.7%
- JANG_2L (base, unmodified JANG): 63 GB, MMLU 74.5%
- MLX 4-bit: broken (~random output), 120 GB, MMLU 26.5%
dealignai/Qwen3.5-VL-122B-A10B-UNCENSORED-JANG_2S
Image-Text-to-Text • 11B • Updated • 1.1k • 1
Note: HarmBench 91.2% (292/320)
- JANG_2S + CRACK (this model): 35 GB, MMLU 77.5%
- JANG_2S (base, no CRACK): 38 GB, MMLU 79%
- MLX 4-bit (base, no CRACK): 64 GB, MMLU 85%
- MLX 2-bit (base, no CRACK): 36 GB, MMLU 56.5%
dealignai/Qwen3.5-VL-9B-JANG_4S-CRACK
Image-Text-to-Text • 2B • Updated • 800 • 2
Note: HarmBench 72.5% (232/320)
- JANG_4S CRACK (this model): 6 GB, MMLU 70.8%
- JANG_4S (base, unmodified JANG): 6 GB, MMLU 73.0%
- MLX 4-bit (uniform quant): 4.7 GB, MMLU 72.5%
dealignai/Qwen3.5-VL-35B-A3B-JANG_4K-CRACK
Image-Text-to-Text • 5B • Updated • 817 • 1
Note: HarmBench 98.4% (315/320)
- JANG_4K CRACK (this model): 18 GB, MMLU 69.2%, 110 tok/s
- JANG_4K (base, unmodified JANG): 18 GB, MMLU 70.8%, 110 tok/s
- JANG_2S (base, lower precision): 11 GB, MMLU ~65%, ~120 tok/s
- MLX 4-bit (uniform quant): 20 GB, MMLU ~60%, ~85 tok/s
dealignai/Qwen3.5-VL-122B-A10B-JANG_4K-CRACK
Image-Text-to-Text • 18B • Updated • 1k • 2
Note: HarmBench 78.4% (251/320)
- JANG_4K CRACK (this model): 62 GB, MMLU 81.5%
- JANG_4K (base): 69 GB, MMLU 86%
- JANG_2S CRACK: 35 GB, MMLU 77.5%
- MLX 2-bit: 36 GB, MMLU 56.5%
- MLX 4-bit: 64 GB, MMLU 85%
dealignai/Qwen3.5-VL-27B-JANG_4S-CRACK
Image-Text-to-Text • 5B • Updated • 631 • 1
Note: HarmBench 75.0% (240/320)
- JANG_4S CRACK (this model): 16 GB, 27 tok/s, MMLU 83.1%
- JANG_4S (base, unmodified JANG): 16 GB, 35 tok/s, MMLU 84.5%
- MLX 4-bit (uniform quant): 14 GB, 20 tok/s, MMLU 84.5%
- MLX 8-bit (2x larger): 29 GB, ~15 tok/s, MMLU ~86%
dealignai/Qwen3.5-VL-4B-JANG_4S-CRACK
Image-Text-to-Text • 1B • Updated • 700 • 4
Note: HarmBench 91.2% (292/320)
- JANG_4S CRACK (this model): 3 GB, 134 tok/s, MMLU 63.1%
- JANG_4S (base, unmodified JANG): 3 GB, 134 tok/s, MMLU 67.5%
- MLX 4-bit (uniform quant): 2.2 GB, ~100 tok/s, MMLU 67.0%
dealignai/Nemotron-3-Super-120B-A12B-JANG_4M-CRACK
Text Generation • 18B • Updated • 491 • 1
Note: HarmBench 90.3% (289/320)
- JANG_4M CRACK (this model): 63 GB, ~40 tok/s, MMLU 94.2%
- JANG_4M (base, unmodified JANG): 63 GB, ~42 tok/s
- JANG_2L CRACK: 43 GB, HarmBench 96.2%, MMLU 95.7%
- MLX 4-bit: stock mlx_lm cannot load Nemotron (LatentMoE unsupported)
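The GB figures in these notes track effective bits per weight: a 120B-parameter model at roughly 4.2 bits/weight lands near the 63 GB quoted above. A quick estimator (the 4.2 bpw figure is inferred from the quoted size, not stated in the notes):

```python
def quantized_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate on-disk size of a quantized model.

    bits_per_weight is the *effective* rate, including group scales/biases,
    so uniform "4-bit" checkpoints typically land a bit above 4.0 bpw.
    """
    # params * bits -> bits; /8 -> bytes; billions of params -> GB directly
    return params_billions * bits_per_weight / 8

# Nemotron-3-Super-120B at ~4.2 effective bpw -> ~63 GB, matching the note
print(round(quantized_size_gb(120, 4.2), 1))  # -> 63.0
```

The same arithmetic explains the ~2x gap between the 2-bit-class (JANG_2L/2S) and 4-bit-class (JANG_4M/4K/4S) entries for the same base model.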
dealignai/Qwen3.5-VL-397B-A17B-JANG_2L-CRACK
Image-Text-to-Text • 54B • Updated • 258 • 1
Note: HarmBench 98.4% (315/320)
- JANG_2L CRACK (this model): 187 GB, MMLU 86.5%
- JANG_2L (base, unmodified JANG): 187 GB, MMLU 87%
- JANG_1L CRACK: 112 GB, MMLU 88.9%, HarmBench 96.2%
dealignai/Nemotron-Cascade-2-30B-A3B-JANG_2L-CRACK
Text Generation • 3B • Updated • 365 • 2
Note: HarmBench 99.7% (319/320)
- JANG_2L CRACK (this model): 10 GB, 121 tok/s, MMLU 66.8% (with reasoning)
Fits on 16 GB Macs. Fastest CRACK model.
dealignai/Nemotron-Cascade-2-30B-A3B-UNCENSORED-JANG_2L
Text Generation • 5B • Updated • 658 • 2
Note: HarmBench 99.4% (318/320)
- JANG_4M CRACK (this model): 17 GB, 127 tok/s, MMLU 82.7% (with reasoning)
- JANG_4M (base): 17 GB, MMLU 88%
- JANG_2L CRACK: 10 GB, MMLU 66.8%, HarmBench 99.7%
dealignai/Mistral-Small-4-119B-JANG_4M-CRACK
Image-Text-to-Text • 19B • Updated • 334 • 2
Note: HarmBench 95.3% (305/320)
- JANG_4M CRACK (this model): 64 GB, MMLU 90.9% (189/208)
- JANG_2L CRACK: 37 GB, MMLU 89.9% (187/208)
dealignai/Gemma-4-26B-A4B-JANG_4M-CRACK
Image-Text-to-Text • 5B • Updated • 7.87k • 33
dealignai/Mistral-Small-4-Uncensored-JANG_2L
Image-Text-to-Text • Updated • 1
Note: HarmBench 95.9% (307/320)
- JANG_2L CRACK (this model): 37 GB, MMLU 89.9% (187/208)
- JANG_4M CRACK: 64 GB, MMLU 90.9% (189/208)
dealignai/Mistral-Small-4-119B-JANG_2L-CRACK
Image-Text-to-Text • 12B • Updated • 415 • 2
Note: HarmBench 95.9% (307/320)
- JANG_2L CRACK (this model): 37 GB, MMLU 89.9% (187/208)
- JANG_4M CRACK: 64 GB, MMLU 90.9% (189/208)
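Several notes quote scores both as a percentage and as a raw count, e.g. MMLU 89.9% as 187/208 just above. A one-liner to cross-check the two forms (the helper name is mine, not from the collection):

```python
def pct(correct: int, total: int) -> float:
    """Score as a percentage rounded to one decimal, matching the notes' format."""
    return round(100 * correct / total, 1)

print(pct(187, 208))  # MMLU for the JANG_2L CRACK entry above -> 89.9
print(pct(307, 320))  # HarmBench 307/320 -> 95.9
```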
dealignai/MiniMax-M2.5-JANG_3L-CRACK
Text Generation • 26B • Updated • 840 • 2