# Qwopus-MoE-35B-A3B-mxfp8-mlx

| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| mxfp8   | *coming soon* | | | | | | |
| qx86-hi | 0.457 | 0.545 | 0.378 | 0.740 | 0.378 | 0.791 | 0.722 |
| qx64-hi | 0.454 | 0.559 | 0.378 | 0.740 | 0.364 | 0.791 | 0.718 |
| mxfp4   | 0.424 | 0.550 | 0.378 | 0.741 | 0.380 | 0.786 | 0.717 |

**Instruct**

| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| mxfp8   | 0.571 | 0.702 | 0.883 | 0.759 | 0.418 | 0.819 | 0.708 |
| qx86-hi | 0.578 | 0.706 | 0.878 | 0.756 | 0.418 | 0.822 | 0.706 |
| qx64-hi | 0.581 | 0.713 | 0.870 | 0.756 | 0.424 | 0.819 | 0.706 |
| mxfp4   | 0.561 | 0.715 | 0.870 | 0.757 | 0.412 | 0.820 | 0.706 |
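
The per-task accuracies above can be collapsed into one summary number per quant. A minimal sketch using the Instruct rows from the table (the unweighted mean is our own summary choice, not part of the published evaluation):

```python
# Mean accuracy per quant over the seven tasks, taken from the
# Instruct table above. The unweighted average is our own quick
# summary, not part of the published benchmark.
scores = {
    "mxfp8":   [0.571, 0.702, 0.883, 0.759, 0.418, 0.819, 0.708],
    "qx86-hi": [0.578, 0.706, 0.878, 0.756, 0.418, 0.822, 0.706],
    "qx64-hi": [0.581, 0.713, 0.870, 0.756, 0.424, 0.819, 0.706],
    "mxfp4":   [0.561, 0.715, 0.870, 0.757, 0.412, 0.820, 0.706],
}

for quant, vals in sorted(scores.items(), key=lambda kv: -sum(kv[1])):
    print(f"{quant:8s} mean={sum(vals) / len(vals):.4f}")
```

On this crude summary the four quants land within half a point of each other.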

| Quant   | Perplexity    | Peak memory | Tokens/sec |
|---------|---------------|-------------|------------|
| mxfp8   | 3.842 ± 0.024 | 42.65 GB    | 1355       |
| qx86-hi | 3.725 ± 0.022 | 45.50 GB    | 1271       |
| qx64-hi | 3.779 ± 0.023 | 36.83 GB    | 1463       |
| mxfp4   | 3.997 ± 0.025 | 25.33 GB    | 1486       |
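
Since quality is close across quants, picking one is mostly a memory question. A minimal sketch of that selection logic, using the numbers from the table above (the `best_fit` helper and the example budgets are our own illustration):

```python
# Measurements copied from the table above.
quants = {
    "mxfp8":   {"ppl": 3.842, "mem_gb": 42.65, "tok_s": 1355},
    "qx86-hi": {"ppl": 3.725, "mem_gb": 45.50, "tok_s": 1271},
    "qx64-hi": {"ppl": 3.779, "mem_gb": 36.83, "tok_s": 1463},
    "mxfp4":   {"ppl": 3.997, "mem_gb": 25.33, "tok_s": 1486},
}

def best_fit(budget_gb: float) -> str | None:
    """Lowest-perplexity quant whose peak memory fits the budget."""
    fits = {q: m for q, m in quants.items() if m["mem_gb"] <= budget_gb}
    return min(fits, key=lambda q: fits[q]["ppl"]) if fits else None

print(best_fit(32.0))  # mxfp4 -> only option under 32 GB
print(best_fit(48.0))  # qx86-hi -> best perplexity when everything fits
```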

## Similar model

Jackrong/Qwen3.5-35B-A3B-Claude-4.6-Opus-Reasoning-Distilled

| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| qx86-hi | 0.427 | 0.497 | 0.378 | 0.693 | 0.384 | 0.777 | 0.689 |

**Instruct**

| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| qx86-hi | 0.520 | 0.649 | 0.871 | 0.710 | 0.428 | 0.799 | 0.707 |

## Baseline models

Qwen3.5-35B-A3B-Instruct

| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa | wino |
|---------|-------|-------|-------|-------|-------|------|------|
| qx86-hi | 0.554 | 0.670 | 0.891 |       |       |      |      |

Qwen3.5-35B-A3B-Text

| Quant   | arc   | arc/e | boolq | hswag | obkqa | piqa  | wino  |
|---------|-------|-------|-------|-------|-------|-------|-------|
| qx86-hi | 0.420 | 0.457 | 0.379 | 0.671 | 0.354 | 0.777 | 0.702 |
| qx64-hi | 0.413 | 0.459 | 0.378 | 0.670 | 0.366 | 0.772 | 0.687 |

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hub.
model, tokenizer = load("nightmedia/Qwopus-MoE-35B-A3B-mxfp8-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
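
For interactive use, the same model can also be streamed token by token. A minimal sketch, assuming a recent mlx-lm release in which `stream_generate` yields response chunks with a `.text` field (`max_tokens=256` is an arbitrary illustrative limit):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("nightmedia/Qwopus-MoE-35B-A3B-mxfp8-mlx")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each chunk as it is produced instead of waiting for the full reply.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```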