gemma-4-E4B-it-mxfp4-mlx

Brainwaves

         arc    arc/e  boolq  hswag  obkqa  piqa   wino
bf16     0.490  0.674  0.793  0.612  0.416  0.756  0.669
mxfp8    0.480  0.656  0.797  0.608  0.400  0.755  0.665
mxfp4    0.455  0.607  0.851  0.585  0.402  0.744  0.651
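A quick way to read the table above is to average each row and compare against the bf16 baseline. The sketch below just redoes that arithmetic on the reported scores (task names and values copied from the table; the averaging itself is an illustration, not part of the original evaluation):

```python
# Benchmark accuracies from the table above, one list per quant,
# in column order: arc, arc/e, boolq, hswag, obkqa, piqa, wino.
scores = {
    "bf16":  [0.490, 0.674, 0.793, 0.612, 0.416, 0.756, 0.669],
    "mxfp8": [0.480, 0.656, 0.797, 0.608, 0.400, 0.755, 0.665],
    "mxfp4": [0.455, 0.607, 0.851, 0.585, 0.402, 0.744, 0.651],
}

# Unweighted mean accuracy per quant.
means = {q: sum(v) / len(v) for q, v in scores.items()}
for q, m in means.items():
    print(f"{q}: mean accuracy {m:.3f}")

# Relative drop of each quant versus the bf16 baseline.
for q in ("mxfp8", "mxfp4"):
    drop = (means["bf16"] - means[q]) / means["bf16"] * 100
    print(f"{q}: {drop:.1f}% below bf16")
```

On these numbers mxfp8 stays within about 1% of bf16 on average, and mxfp4 within about 3%, despite the individual boolq score moving the other way.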

Quant    Perplexity      Peak Memory   Tokens/sec
mxfp8    35.937 ± 0.525  14.80 GB      1153
mxfp4    36.746 ± 0.534  11.06 GB      1030
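The memory/speed trade-off in the second table can be summarized the same way. This is a minimal sketch using only the peak-memory and tokens/sec figures reported above:

```python
# Peak memory (GB) and throughput (tokens/sec) from the table above.
mem = {"mxfp8": 14.80, "mxfp4": 11.06}
tps = {"mxfp8": 1153, "mxfp4": 1030}

# Relative savings / cost of mxfp4 versus mxfp8.
mem_saving = (mem["mxfp8"] - mem["mxfp4"]) / mem["mxfp8"] * 100
tps_cost = (tps["mxfp8"] - tps["mxfp4"]) / tps["mxfp8"] * 100
print(f"mxfp4 uses {mem_saving:.1f}% less peak memory "
      f"at a {tps_cost:.1f}% throughput cost")
```

Roughly a quarter less peak memory for about a tenth of the throughput, with perplexity overlapping within the reported error bars.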

The model's chat template has been updated with the latest Jinja template from Google.

See parent model for instructions on install and use with Transformers.

-G
