Uncensored Qwen3.5 MLX

Quality: quantized (mixed quants per tensor, group size 32, 9.119 bpw)
Only the MLP layers were quantized to 4-bit; the rest remain in fp16 or bf16.
Prefill time is improved by up to 35% on older chips (M1 Max/Ultra, and probably the M2 generation).
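Mixed-precision figures like the 9.119 bpw above are just a parameter-weighted average of the per-tensor bit widths. The sketch below shows the arithmetic; the layer sizes are hypothetical placeholders, not the actual Qwen3.5-27B tensor shapes, and per-group scale/bias overhead from group-size-32 quantization is ignored.

```python
def average_bpw(tensors):
    """Weighted average bits per weight over (param_count, bits) pairs."""
    total_bits = sum(n * b for n, b in tensors)
    total_params = sum(n for n, _ in tensors)
    return total_bits / total_params

# Hypothetical split: 4-bit MLP weights, 16-bit everything else.
# (Group-size-32 quantization also stores scales/biases per group,
# which adds a small per-weight overhead that is ignored here.)
tensors = [
    (18_000_000_000, 4),   # MLP weights (hypothetical count)
    (9_000_000_000, 16),   # attention, embeddings, norms (hypothetical)
]
print(round(average_bpw(tensors), 3))  # → 8.0
```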
This is an abliterated (uncensored) version of Qwen/Qwen3.5-27B, made with Heretic v1.2.0 using Magnitude-Preserving Orthogonal Ablation (MPOA).
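Abliteration works by projecting a learned "refusal direction" out of selected weight matrices; the magnitude-preserving variant then rescales each row back to its original norm so the overall weight magnitudes are unchanged. The following is an illustrative reconstruction of that idea on a single weight row, not Heretic's actual implementation:

```python
import math

def ablate_row(row, direction):
    """Project the refusal direction out of one weight row, then rescale
    the row back to its original L2 norm (magnitude preservation)."""
    norm_d = math.sqrt(sum(x * x for x in direction))
    d = [x / norm_d for x in direction]           # unit refusal direction
    dot = sum(r * x for r, x in zip(row, d))      # component along direction
    projected = [r - dot * x for r, x in zip(row, d)]
    old_norm = math.sqrt(sum(r * r for r in row))
    new_norm = math.sqrt(sum(p * p for p in projected))
    if new_norm == 0.0:
        return projected
    scale = old_norm / new_norm
    return [p * scale for p in projected]

# Toy example: remove the x-axis component from a row while keeping its norm.
print(ablate_row([3.0, 4.0], [1.0, 0.0]))  # → [0.0, 5.0]
```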
| Metric | This model | Original model (Qwen/Qwen3.5-27B) |
|---|---|---|
| KL divergence | 0.0653 | 0 (by definition) |
| Refusals | 14/100 | 94/100 |
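The KL divergence above measures how far the ablated model's output distribution drifts from the original's (0 means identical, which is why the original scores 0 by definition). A minimal sketch of the metric on toy next-token distributions:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) in nats for two discrete distributions over the same vocab."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions: original model (p) vs. ablated model (q).
p = [0.7, 0.2, 0.1]
q = [0.6, 0.25, 0.15]
print(kl_divergence(p, p))  # identical distributions → 0.0
print(round(kl_divergence(p, q), 4))  # small positive drift
```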

Ablation parameters:

| Parameter | Value |
|---|---|
| direction_index | 37.97 |
| attn.o_proj.max_weight | 1.45 |
| attn.o_proj.max_weight_position | 59.09 |
| attn.o_proj.min_weight | 1.44 |
| attn.o_proj.min_weight_distance | 34.80 |
| mlp.down_proj.max_weight | 1.43 |
| mlp.down_proj.max_weight_position | 41.91 |
| mlp.down_proj.min_weight | 0.72 |
| mlp.down_proj.min_weight_distance | 28.18 |
Suggested sampling presets:

- temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
- temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0
- temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
- temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0

Set the presence_penalty parameter between 0 and 2 to reduce endless repetitions; however, a higher value may occasionally cause language mixing and a slight decrease in model performance.

This model was converted to MLX format from coder3101/Qwen3.5-27B-heretic.
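The presets above combine several common truncation filters. As an illustration of what top_k and top_p actually do to the next-token distribution before sampling (an illustrative reimplementation, not the code any particular MLX runtime uses):

```python
def filter_top_k_top_p(probs, top_k, top_p):
    """Keep the top_k most likely tokens, then the smallest prefix of them
    whose cumulative probability reaches top_p; renormalize, zero the rest."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order[:top_k]:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    out = [0.0] * len(probs)
    for i in kept:
        out[i] = probs[i] / total
    return out

# Toy vocabulary of 5 tokens: top_k=3 caps the candidates, top_p=0.8
# then trims them to the smallest high-probability nucleus.
probs = [0.5, 0.2, 0.15, 0.1, 0.05]
print(filter_top_k_top_p(probs, top_k=3, top_p=0.8))
```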