How to use from Docker Model Runner

docker model run hf.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.5bpw
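A minimal sketch of a typical workflow with Docker Model Runner: pull the weights once, then run the model either with a one-shot prompt or interactively. The prompt text below is illustrative, not part of the model card.

```shell
# Pull the EXL2-quantized weights from Hugging Face (one-time download).
docker model pull hf.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.5bpw

# Run with a one-shot prompt; omit the prompt to start an interactive session.
docker model run hf.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.5bpw \
  "Summarize the plot of Hamlet in two sentences."
```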
Configuration parsing warning: in config.json, "quantization_config.bits" must be an integer.
Downloads last month: 3
Model tree for SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha_EXL2_3.5bpw: this model is one of 15 quantized variants of its base model.