Important: This model uses the JANG quantization format — the GGUF equivalent for MLX on Apple Silicon. Currently only supported by MLX Studio and the jang-tools Python package.


MLX Studio — the only app that natively supports JANG models


Nemotron Cascade 2 30B — JANG_2L + CRACK

JANG mixed-precision · CRACK abliterated · Mamba + MoE + Attention · No guardrails · 10 GB



What Is This?

This is NVIDIA Nemotron Cascade 2 30B — a 30B-parameter hybrid model combining three layer types: Mamba-2 SSM, MoE (128 experts, top-6 routing), and Attention. It is one of the most architecturally advanced small models available.

It has been:

  1. JANG quantized — JANG_2L profile (8-bit attention, 6-bit important, 2-bit experts) — 10 GB
  2. CRACK abliterated — permanent weight-level removal of safety refusal

| Spec | Value |
|---|---|
| Architecture | Nemotron Cascade 2 — 30B total, ~3B active, 3 layer types |
| Quantization | JANG_2L (8/6/2-bit mixed, 2.3 bits avg) — 10 GB |
| HarmBench | 99.7% (319/320) |
| MMLU | 66.8% (139/208) |
| Speed | ~121 tok/s (M4 Ultra, 256 GB) |
| Thinking | ON/OFF supported (ChatML) |
| Memory | Fits on 16 GB+ Macs |
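The stated 2.3-bit average nearly accounts for the 10 GB file size on its own; a quick sanity check (the attribution of the remainder is an assumption, not a published breakdown):

```python
# Back-of-envelope check of the 10 GB figure, assuming the stated
# JANG_2L mix (8-bit attention, 6-bit important, 2-bit experts)
# averages ~2.3 bits per weight across all 30B parameters.
TOTAL_PARAMS = 30e9
AVG_BITS = 2.3

weight_bytes = TOTAL_PARAMS * AVG_BITS / 8
print(f"~{weight_bytes / 1e9:.1f} GB of raw quantized weights")
# The remaining ~1.4 GB plausibly covers per-group scales/zero-points
# and embedding/output tensors kept at higher precision.
```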

Also see: JANG_4M version — 17 GB, 99.4% HarmBench, 82.7% MMLU (fits on 32 GB Macs)


HarmBench Results

319/320 (99.7%)

| Category | Score | Pass rate |
|---|---|---|
| Auth Bypass | 100/100 | 100% |
| Cloud Exploits | 100/100 | 100% |
| Covering Tracks | 20/20 | 100% |
| API Hacking | 99/100 | 99% |

CRACK vs Base

| (JANG_2L) | CRACK | Base |
|---|---|---|
| MMLU (with thinking) | 66.8% | ~68% |
| MMLU (no thinking) | 49.0% | 51.0% |
| HarmBench | 99.7% | 0% |
| Speed | ~121 tok/s | ~125 tok/s |

Surgery impact on reasoning: minimal (-2% no-think, ~-1% with thinking).

JANG_2L CRACK vs JANG_4M CRACK vs JANG_2L Base

| | JANG_2L CRACK | JANG_2L Base | JANG_4M CRACK |
|---|---|---|---|
| Size | 10 GB | 10 GB | 17 GB |
| MMLU | 66.8% | ~68% | 82.7% |
| HarmBench | 99.7% | 0% | 99.4% |
| Speed | ~121 tok/s | ~125 tok/s | ~127 tok/s |
| Fits on | 16 GB Mac | 16 GB Mac | 32 GB Mac |

Note: There is no standard MLX quantization for Nemotron Cascade 2. The nemotron_h architecture (Mamba + MoE + Attention hybrid) is only supported by JANG format via MLX Studio and jang-tools.

MMLU Results (with reasoning recovery)

139/208 (66.8%) — no-think 102/208 + thinking recovered 37

| Subject | Score | % |
|---|---|---|
| HS Biology | 14/16 | 88% |
| HS Geography | 12/16 | 75% |
| World Religions | 11/16 | 69% |
| Conceptual Physics | 11/16 | 69% |
| Electrical Engineering | 10/16 | 62% |
| College Physics | 9/16 | 56% |
| Formal Logic | 9/16 | 56% |
| Professional Medicine | 7/16 | 44% |
| College Mathematics | 6/16 | 38% |
| College CS | 4/16 | 25% |
| HS Mathematics | 4/16 | 25% |
| Machine Learning | 3/16 | 19% |
| Abstract Algebra | 2/16 | 12% |

Scores shown are the no-think pass. The thinking-recovery pass improved the total from 49.0% to 66.8%.
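The recovery scoring described above can be sketched as a two-pass harness: every question is answered in no-think mode first, and only the failures are retried with thinking enabled. `answer()` below is a hypothetical stand-in for the model call, not part of jang-tools:

```python
def evaluate_with_recovery(questions, answer):
    """Two-pass scoring: no-think first, retry failures with thinking.

    `questions` is a list of (prompt, gold) pairs; `answer(prompt, thinking=...)`
    is a hypothetical stand-in for a model call returning a letter choice.
    """
    no_think_pass, recovered = 0, 0
    for prompt, gold in questions:
        if answer(prompt, thinking=False) == gold:
            no_think_pass += 1                 # counted in the table above
        elif answer(prompt, thinking=True) == gold:
            recovered += 1                     # "thinking recovered"
    return no_think_pass, recovered

# With this card's numbers: 102 no-think passes + 37 recovered = 139/208 (66.8%).
```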


Install & Usage

```bash
pip install "jang[mlx]"
```

```python
from jang_tools.loader import load_jang_model
from mlx_lm import generate

model, tokenizer = load_jang_model(
    "dealignai/Nemotron-Cascade-2-30B-A3B-JANG_2L-CRACK")

messages = [{"role": "user", "content": "Your prompt here"}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False)

response = generate(model, tokenizer, prompt=prompt, max_tokens=2000)
print(response)
```

Thinking Mode

Thinking is ON by default. To disable:

```python
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True,
    enable_thinking=False, tokenize=False)
```

About JANG

JANG (Jang Adaptive N-bit Grading) is a mixed-precision quantization format for Apple Silicon — the GGUF equivalent for MLX.
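As with GGUF, the underlying idea is group-wise low-bit quantization with per-group scales, applied at different bit widths per tensor. Below is a minimal sketch of a symmetric n-bit round trip — illustrative only, not the actual JANG codec, whose on-disk layout is not documented here:

```python
import numpy as np

def quantize_dequantize(w, bits, group_size=64):
    """Symmetric per-group n-bit quantization round trip (illustrative)."""
    w = w.reshape(-1, group_size)
    qmax = 2 ** (bits - 1) - 1                  # e.g. 127 for 8-bit, 1 for 2-bit
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale[scale == 0] = 1.0                     # avoid divide-by-zero on all-zero groups
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return (q * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(4096).astype(np.float32)
for bits in (8, 6, 2):
    err = np.abs(quantize_dequantize(w, bits) - w).mean()
    print(f"{bits}-bit mean abs error: {err:.4f}")
```

Mixed-precision formats like JANG_2L spend the higher bit widths where error hurts most (attention), and tolerate the coarse 2-bit error in the many, rarely-active expert weights.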

About CRACK

CRACK (Controlled Refusal Ablation via Calibrated Knockouts) removes safety alignment from LLMs at the weight level using per-layer projected vectors from structurally-mirrored prompt pairs.
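Weight-level ablation of a refusal direction is commonly implemented as an orthogonal projection: estimate a unit direction r from paired activations, then remove its component from each weight matrix. A minimal numpy sketch of that projection step (the actual CRACK calibration and layer selection are not published here):

```python
import numpy as np

def ablate_direction(W, r):
    """Remove direction r from the output space of weight matrix W:
    W' = (I - r r^T) W, so W' @ x has no component along r for any x."""
    r = r / np.linalg.norm(r)
    return W - np.outer(r, r) @ W

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
r = rng.standard_normal(8)          # stand-in for a calibrated refusal direction
W_ablated = ablate_direction(W, r)

# Any output of the ablated matrix is orthogonal to r (up to float error).
x = rng.standard_normal(8)
print(np.dot(r / np.linalg.norm(r), W_ablated @ x))  # ~0
```

Because the projection is applied to the weights themselves, the change survives any prompt or sampling settings — which is why the card describes the removal as permanent.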


Links

Ko-fi X/Twitter GitHub MLX Studio Website


Disclaimer

This model is provided for research and educational purposes. The creators are not responsible for any misuse. By downloading this model, you agree to use it responsibly and in compliance with applicable laws.


Korean Summary (한국어)

Nemotron Cascade 2 30B — JANG_2L + CRACK

| Item | Value |
|---|---|
| Size | 10 GB |
| HarmBench | 99.7% (319/320) |
| MMLU | 66.8% (139/208) |
| Speed | ~121 tok/s (M4 Ultra) |
| Minimum hardware | Mac with 16 GB memory |

pip install "jang[mlx]"

GitHub · HuggingFace · MLX Studio · Ko-fi · X @dealignai


Created by Jinho Jang (장진호)
