Model info

Creator: https://civitai.com/user/jice

https://civitai.com/models/383364?modelVersionId=471056

creapromptLightning_creapromtHypersdxlV1_r32_r0.1_HSWQ_fp8e4m3.safetensors: full model

creapromptLightning_creapromtHypersdxlV1_r32_r0.1_HSWQ_fp8e4m3_unetonly.safetensors: UNet only; can be paired with my NVFP4 CLIP models and the TAESD VAE (https://github.com/madebyollin/taesd)

https://civitai.com/models/383364?modelVersionId=505350

creapromptLightning_creapromptHyperCFGV2_r32_r0.1_HSWQ_fp8e4m3.safetensors: full model

creapromptLightning_creapromptHyperCFGV2_r32_r0.1_HSWQ_fp8e4m3_unetonly.safetensors: UNet only; can be paired with my NVFP4 CLIP models and the TAESD VAE (https://github.com/madebyollin/taesd)
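Before wiring a UNet-only file into a workflow, it can help to confirm which tensors were actually cast to FP8-E4M3. The safetensors format stores a plain JSON header (an 8-byte little-endian length prefix followed by UTF-8 JSON mapping tensor names to dtype, shape, and data offsets), so dtypes can be inspected without loading any weights. A minimal sketch; the tensor names below are made up for illustration and the blob is built in memory rather than read from one of the files above:

```python
import json
import struct

def read_safetensors_header(blob: bytes) -> dict:
    """Parse the JSON header of a .safetensors blob.

    Layout: 8-byte little-endian header length, then a UTF-8 JSON
    dict of {tensor_name: {"dtype", "shape", "data_offsets"}}.
    """
    (header_len,) = struct.unpack("<Q", blob[:8])
    return json.loads(blob[8 : 8 + header_len].decode("utf-8"))

# Build a tiny in-memory blob to demonstrate (hypothetical tensor names).
header = {
    "unet.block.weight": {"dtype": "F8_E4M3", "shape": [2, 2], "data_offsets": [0, 4]},
    "unet.block.bias": {"dtype": "F16", "shape": [2], "data_offsets": [4, 8]},
}
header_bytes = json.dumps(header).encode("utf-8")
blob = struct.pack("<Q", len(header_bytes)) + header_bytes + bytes(8)

meta = read_safetensors_header(blob)
for name, info in meta.items():
    print(name, info["dtype"], info["shape"])
```

The same function works on the first bytes of a real checkpoint file, which is a quick way to verify that FP8 weights were not silently re-expanded by a conversion step.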

Hybrid-Sensitivity-Weighted-Quantization (HSWQ)

High-fidelity FP8 quantization for diffusion models (SDXL). Instead of a naive uniform cast, HSWQ uses sensitivity and importance analysis to decide how each layer is quantized, and offers two modes: standard-compatible (V1) and high-performance scaled (V2).
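To make the idea concrete, here is a toy sketch of sensitivity-weighted quantization: emulate FP8-E4M3 rounding, score each layer by the relative error the cast introduces, and keep the most sensitive layers in higher precision. This is not the author's actual HSWQ pipeline; the layer names, weights, and threshold are hypothetical:

```python
import math

def fp8_e4m3(x: float) -> float:
    """Round x to the nearest FP8 E4M3 value (toy emulation, no NaN/Inf handling)."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 448.0)                  # E4M3 largest finite value
    e = max(math.floor(math.log2(mag)), -6)   # clamp into the subnormal range
    step = 2.0 ** (e - 3)                     # 3 mantissa bits per binade
    return sign * round(mag / step) * step

def sensitivity(weights) -> float:
    """Relative MSE introduced by casting a layer's weights to FP8."""
    err = sum((w - fp8_e4m3(w)) ** 2 for w in weights)
    norm = sum(w * w for w in weights) or 1.0
    return err / norm

# Hypothetical layers: small-magnitude tensors suffer far more from FP8 rounding.
layers = {
    "time_embed": [0.0013, -0.0007, 0.0021],
    "mid_block": [0.9, -1.3, 0.45],
}
scores = {name: sensitivity(w) for name, w in layers.items()}
# Layers scoring above a (hypothetical) threshold stay in higher precision.
keep_high_precision = {n for n, s in scores.items() if s > 1e-2}
print(scores, keep_high_precision)
```

A real pipeline would measure sensitivity against activations or output quality rather than raw weight error, but the per-layer keep-or-cast decision is the core idea that separates HSWQ from a uniform cast.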

Technical details: md/HSWQ_ Hybrid Sensitivity Weighted Quantization.md

How to quantize: md/HSWQ_ How to quantize SDXL.md

SDXL Benchmark Test Results: md/SDXL Benchmark Test Results.md

Credit & Special Acknowledgement

https://github.com/ussoewwin/Hybrid-Sensitivity-Weighted-Quantization

https://github.com/tritant/ComfyUI_Kitchen_nvfp4_Converter

https://github.com/NVIDIA/Model-Optimizer

We extend our deepest respect and gratitude to the Nunchaku Team for their groundbreaking work on SVDQ quantization and for sharing their models with the community. This collection relies heavily on their research and original implementation.
