Qwen3.5-9B Ultra Heretic

Quality: original bfloat16

This is an abliterated (uncensored) version of Qwen/Qwen3.5-9B, made using Heretic v1.2.0 with Magnitude-Preserving Orthogonal Ablation (MPOA) and Self-Organizing Map Abliteration (SOMA).

Performance

| Metric | This model | Original model (Qwen3.5-9B) |
| --- | --- | --- |
| KL divergence | 0.1085 | 0 (by definition) |
| Refusals | 2/100 | 86/100 |

A lower refusal count indicates fewer content restrictions, while a lower KL divergence indicates better preservation of the original model's capabilities.
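For context, the KL divergence metric compares the next-token probability distributions of the abliterated and original models. A minimal sketch of the computation, using small hypothetical distributions (not the actual measurement):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i).

    Measures how far distribution Q diverges from the reference P;
    0 means the two distributions are identical (hence "0 by definition"
    when a model is compared against itself)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical next-token distributions over a tiny 3-token vocabulary:
original = [0.7, 0.2, 0.1]
ablated  = [0.6, 0.3, 0.1]

print(round(kl_divergence(original, ablated), 4))
```

In practice the divergence is averaged over many prompts and full-vocabulary distributions, but the per-position formula is the same.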

Sampling Parameters:

  • I suggest using the following sets of sampling parameters depending on the mode and task type:
    • Thinking mode for general tasks:
      temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
    • Instruct (or non-thinking) mode for general tasks:
      temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0
    • Instruct (or non-thinking) mode for reasoning tasks:
      temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0
  • For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
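If you serve the model behind an OpenAI-compatible endpoint, the presets above map onto request fields roughly as sketched below. The model name and preset labels are illustrative assumptions; note that `top_k`, `min_p`, and `repetition_penalty` are server-specific extensions rather than core OpenAI fields, so check your serving framework's docs.

```python
# Suggested sampling presets from the list above, expressed as
# OpenAI-style request fields. Preset names are illustrative.
SAMPLING_PRESETS = {
    "thinking_general": {
        "temperature": 1.0, "top_p": 0.95, "top_k": 20,
        "min_p": 0.0, "presence_penalty": 1.5, "repetition_penalty": 1.0,
    },
    "instruct_general": {
        "temperature": 0.7, "top_p": 0.8, "top_k": 20,
        "min_p": 0.0, "presence_penalty": 1.5, "repetition_penalty": 1.0,
    },
    "instruct_reasoning": {
        "temperature": 1.0, "top_p": 1.0, "top_k": 40,
        "min_p": 0.0, "presence_penalty": 2.0, "repetition_penalty": 1.0,
    },
}

def build_request(prompt: str, preset: str) -> dict:
    """Assemble a chat-completion payload from one of the presets above."""
    return {
        "model": "Qwen3.5-9B-Ultra-Heretic",  # illustrative model name
        "messages": [{"role": "user", "content": prompt}],
        **SAMPLING_PRESETS[preset],
    }
```

For example, `build_request("Hello", "instruct_general")` yields a payload with `temperature=0.7` and `presence_penalty=1.5`, matching the instruct-mode preset for general tasks.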

Source

This model was converted to MLX format from llmfan46/Qwen3.5-9B-ultra-heretic using mlx-vlm version 0.4.

