[eval chart image]
* evals calculated with llama.cpp llama-perplexity

google/gemma-3-12b-it neopolitized with projected shards and fragments of mistralai/Devstral-2-123B-Instruct-2512.

  • projection method: 3
  • merge method: 0
  • layers: 0-6 [x->x]
  • alpha: 0.8-0.8
  • tensors: attn_q, attn_k, attn_v
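The parameters above describe an alpha-blended tensor merge: donor attention weights are projected to the base model's shapes, then interpolated into layers 0-6. The sketch below is a minimal illustration only, not the actual recipe: the truncated-SVD projection (standing in for "projection method: 3"), the GGUF-style key naming, the function names, and the assumption that alpha weights the donor are all guesses.

```python
import numpy as np

ALPHA = 0.8                  # from the card: alpha 0.8-0.8
LAYERS = range(0, 7)         # layers 0-6, identity mapping [x->x]
TENSORS = ("attn_q", "attn_k", "attn_v")

def project_donor(donor: np.ndarray, target_shape: tuple) -> np.ndarray:
    """Hypothetical projection: shrink a larger donor weight matrix to the
    target shape via truncated SVD, keeping its dominant directions."""
    rows, cols = target_shape
    u, s, vt = np.linalg.svd(donor, full_matrices=False)
    k = min(rows, cols, len(s))
    approx = (u[:, :k] * s[:k]) @ vt[:k, :]
    return approx[:rows, :cols]

def blend(base: np.ndarray, donor_proj: np.ndarray, alpha: float = ALPHA) -> np.ndarray:
    """Linear interpolation; alpha weighting the donor is an assumption."""
    return (1.0 - alpha) * base + alpha * donor_proj

def merge(base_weights: dict, donor_weights: dict) -> dict:
    """Blend only the listed attention tensors in layers 0-6; copy the rest."""
    out = dict(base_weights)
    for layer in LAYERS:
        for name in TENSORS:
            key = f"blk.{layer}.{name}.weight"  # GGUF-style key (assumption)
            if key in base_weights and key in donor_weights:
                proj = project_donor(donor_weights[key], base_weights[key].shape)
                out[key] = blend(base_weights[key], proj)
    return out
```

Tensors outside the named set, or outside layers 0-6, pass through unchanged; only the shape-projected attention weights are interpolated.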
                             8 w  w       
8d8b. .d88b .d8b. 88b. .d8b. 8 w w8ww .d88
8P Y8 8.dP' 8' .8 8  8 8' .8 8 8  8   8  8
8   8 `Y88P `Y8P' 88P' `Y8P' 8 8  Y8P `Y88
                  8                       
Format: GGUF (16-bit)
Model size: 12B params
Architecture: gemma3


Model tree for neopolita/Neo-gemma-3-12b-it-v0