littlemonster-reasoning-v2-12B-QKVO-GGUF

Quantized GGUF files for reedmayhew/littlemonster-reasoning-v2-12B-QKVO-HF.

Available Formats

| Quant Type | Imatrix Used |
|------------|--------------|
| IQ3_XS     | Yes          |
| Q3_K_L     | No           |
| IQ3_M      | Yes          |
| IQ4_XS     | Yes          |
| Q4_K_M     | No           |
| IQ5_XS     | Yes          |
| IQ5_M      | Yes          |
| Q6_K       | No           |
| Q8_0       | No           |
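As a rough guide for choosing between these quants, a file's size scales with parameter count times bits per weight. A minimal sketch (the bits-per-weight figures are approximate llama.cpp values, not measured from these files):

```python
def est_size_gb(n_params_b: float, bpw: float) -> float:
    # size in GB ~= billions of params * bits-per-weight / 8 bits-per-byte
    return n_params_b * bpw / 8

# Approximate bits-per-weight for common llama.cpp quant types (assumption,
# actual values vary slightly by model and tensor layout).
BPW = {"IQ3_XS": 3.3, "IQ4_XS": 4.25, "Q4_K_M": 4.85, "Q6_K": 6.56, "Q8_0": 8.5}

for name, bpw in BPW.items():
    print(f"{name}: ~{est_size_gb(12, bpw):.1f} GB")
```

For this 12B model, the estimate puts IQ3_XS at roughly 5 GB and Q8_0 at roughly 13 GB, which is a reasonable way to match a quant to available VRAM.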
Format: GGUF
Model size: 12B params
Architecture: gemma3