Qwen2.5-Coder-14B-Instruct — MLX 4-bit

A 4-bit MLX conversion of Qwen2.5-Coder-14B-Instruct for Apple Silicon, running natively on M1, M2, M3, and M4 Macs via mlx_lm. This is a faithful conversion of the upstream weights: no fine-tuning, no merging.

Quick facts

  • Upstream base: Qwen2.5-Coder-14B-Instruct
  • Format: MLX 4-bit (mlx_lm.convert --q-bits 4 --q-group-size 64)
  • License: apache-2.0
  • Use case: code
  • Runtime: mlx-lm, LM Studio, Jan, Outlier desktop app
  • Publisher: Outlier-Ai (solo, Mac-native AI platform)

Quickstart (mlx-lm)

pip install -U mlx-lm
python -m mlx_lm.generate \
  --model Outlier-Ai/Qwen2.5-Coder-14B-Instruct-MLX-4bit \
  --prompt "Write a haiku about compilers." \
  --max-tokens 256
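
The same model can be driven from Python. Below is a minimal sketch using the mlx-lm Python API (load/generate); the exact set of generate keywords varies a little between mlx-lm releases, so treat it as illustrative rather than canonical.

from mlx_lm import load, generate

# Download (if needed) and load the quantized weights plus tokenizer.
model, tokenizer = load("Outlier-Ai/Qwen2.5-Coder-14B-Instruct-MLX-4bit")

# Qwen2.5-Coder-Instruct is a chat model, so wrap the request in its chat template.
messages = [{"role": "user", "content": "Write a haiku about compilers."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Generate up to 256 new tokens; verbose=True streams them to stdout.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)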

Quickstart (LM Studio)

Search for Outlier-Ai/Qwen2.5-Coder-14B-Instruct-MLX-4bit in the model browser, click Load, and chat.
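
LM Studio can also serve the loaded model through its OpenAI-compatible local server (off by default; enable it in the server tab). A sketch assuming the default endpoint http://localhost:1234/v1; the model id string is an assumption here — pass whatever id LM Studio reports for the loaded model.

from openai import OpenAI

# LM Studio's local server ignores the API key, but the client requires one.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="Outlier-Ai/Qwen2.5-Coder-14B-Instruct-MLX-4bit",  # id as shown in LM Studio
    messages=[{"role": "user", "content": "Write a haiku about compilers."}],
    max_tokens=256,
)
print(resp.choices[0].message.content)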

Quickstart (Outlier desktop app)

One-click install from outlier.host; the model appears in the library picker under the code category.

Why this repo exists

This is a canonically-named twin of Outlier-Ai/Outlier-Coder-Qwen25-14B-MLX-4bit. Same weights, same quant, different name. The branded repo carries the Outlier identity; this one carries the upstream identity so it's findable via searches like "qwen2.5-coder-14b-instruct 4bit mlx". Use whichever path you came across — both point to the same files.

Provenance

  • Converted via mlx_lm.convert --q-bits 4 --q-group-size 64 (Python reproduction sketch below)
  • Source: Qwen2.5-Coder-14B-Instruct
  • Date produced: 2026-04-19 (Day 19)
  • Conversion factory: Outlier MEGA-INFINITE phase3
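
For reproducibility, mlx-lm exposes the same conversion as a Python function. A sketch assuming the upstream repo id Qwen/Qwen2.5-Coder-14B-Instruct and the flags listed above; parameter names may shift between mlx-lm releases.

from mlx_lm import convert

# Fetch the upstream weights and write a 4-bit quantized MLX copy.
convert(
    "Qwen/Qwen2.5-Coder-14B-Instruct",               # upstream repo (assumed id)
    mlx_path="Qwen2.5-Coder-14B-Instruct-MLX-4bit",  # local output directory
    quantize=True,
    q_bits=4,         # matches --q-bits 4
    q_group_size=64,  # matches --q-group-size 64
)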

Attribution

Base model by the upstream author (see Qwen2.5-Coder-14B-Instruct). Outlier contributes the MLX 4-bit conversion and desktop-app packaging. Capability credit for the base belongs entirely to upstream.

License

Redistributed under the upstream license: apache-2.0. Respect the terms of Qwen2.5-Coder-14B-Instruct when using this conversion.
