Model info

Creator: https://civitai.com/user/Bilered

Lumachrome_Illustrious_HSWQ_fp8e4m3.safetensors https://civitai.com/models/2528730/lumachrome-illustrious

Hybrid-Sensitivity-Weighted-Quantization (HSWQ)

High-fidelity FP8 quantization for diffusion models (SDXL). HSWQ uses sensitivity and importance analysis instead of a naive uniform cast, and offers two modes: standard-compatible (V1) and high-performance scaled (V2).
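
The idea can be sketched in a few lines. The snippet below is a minimal illustration, not the HSWQ implementation: it simulates FP8 e4m3 rounding (3 mantissa bits, max normal 448, subnormals ignored), scores a tensor's quantization sensitivity as relative squared error under a plain cast, and shows how a V2-style per-tensor scale recovers precision for small-magnitude weights. All function names here are hypothetical.

```python
import math

E4M3_MAX = 448.0  # largest normal value representable in FP8 e4m3


def fp8_e4m3_round(x: float) -> float:
    """Round a float to a nearby FP8 e4m3 value (sketch: subnormals ignored)."""
    if x == 0.0:
        return 0.0
    s = math.copysign(1.0, x)
    a = min(abs(x), E4M3_MAX)                     # saturate at the format maximum
    e = max(min(math.floor(math.log2(a)), 8), -6)  # clamp the 4-bit exponent range
    m = round(a / 2.0 ** e * 8) / 8.0             # quantize to 3 mantissa bits
    return s * m * 2.0 ** e


def sensitivity(w):
    """Relative quantization error of a tensor under a naive FP8 cast."""
    err = sum((x - fp8_e4m3_round(x)) ** 2 for x in w)
    ref = sum(x * x for x in w) or 1.0
    return err / ref


def quantize_tensor(w, scaled=False):
    """Cast one tensor to (simulated) FP8.

    scaled=False mimics a V1-style direct cast; scaled=True mimics a
    V2-style cast with a per-tensor scale that maps max|w| onto E4M3_MAX.
    """
    if scaled:
        scale = E4M3_MAX / max(abs(x) for x in w)
        return [fp8_e4m3_round(x * scale) / scale for x in w], scale
    return [fp8_e4m3_round(x) for x in w], 1.0
```

A sensitivity-weighted scheme would run `sensitivity` per layer and keep the highest-scoring (most fragile) tensors in higher precision or scaled mode, casting only the robust ones directly, which is how such methods avoid the quality loss of a uniform cast.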

Technical details: md/HSWQ_ Hybrid Sensitivity Weighted Quantization.md

How to quantize: md/HSWQ_ How to quantize SDXL.md

SDXL Benchmark Test Results: md/SDXL Benchmark Test Results.md

Credit & Special Acknowledgement

https://github.com/ussoewwin/Hybrid-Sensitivity-Weighted-Quantization

https://github.com/tritant/ComfyUI_Kitchen_nvfp4_Converter

https://github.com/NVIDIA/Model-Optimizer

We extend our deepest respect and gratitude to the Nunchaku Team for their groundbreaking work on SVDQ quantization and for sharing their models with the community. This collection relies heavily on their research and original implementation.
