Heretic-Qwen3-Zero-Coder-Reasoning-V2-0.8B

Abliterated (refusal-ablated) version of DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B, produced with the Heretic tool.


Refusals (this model): 9/100 test prompts
Refusals (original, DavidAU/Qwen3-Zero-Coder-Reasoning-V2-0.8B): 32/100 test prompts
KL divergence (vs. original): 0.0070
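The KL divergence figure measures how far the ablated model's next-token distribution drifts from the original's; a value of 0.0070 indicates the model's behavior is nearly unchanged outside of refusals. A minimal sketch of the underlying computation on toy distributions (the numbers below are illustrative, not taken from the Heretic run):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) = sum p_i * log(p_i / q_i), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions: the ablated model (q) barely
# shifts probability mass relative to the original (p).
p = [0.70, 0.20, 0.10]
q = [0.69, 0.21, 0.10]
drift = kl_divergence(p, q)
print(drift)  # a small positive number, since q is close to p
```

In practice this is averaged over many prompts and the full vocabulary, but the per-position computation has this shape.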

Parameters
direction_index = 28.06
attn.o_proj.max_weight = 0.98
attn.o_proj.max_weight_position = 28.66
attn.o_proj.min_weight = 0.50
attn.o_proj.min_weight_distance = 20.80
mlp.down_proj.max_weight = 1.14
mlp.down_proj.max_weight_position = 25.19
mlp.down_proj.min_weight = 0.42
mlp.down_proj.min_weight_distance = 4.25
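Abliteration works by identifying a "refusal direction" in the model's residual stream (selected by direction_index above) and attenuating its contribution in the attn.o_proj and mlp.down_proj outputs, with per-layer strength controlled by the max_weight/min_weight parameters. A minimal sketch of the core projection step, removing a vector's component along a unit direction (a simplified pure-Python illustration, not the Heretic implementation):

```python
import math

def ablate_direction(v, d, weight=1.0):
    """Subtract the component of v along direction d,
    scaled by `weight` (1.0 removes it entirely)."""
    norm = math.sqrt(sum(x * x for x in d))
    d_hat = [x / norm for x in d]                      # unit direction
    proj = sum(vi * di for vi, di in zip(v, d_hat))    # v . d_hat
    return [vi - weight * proj * di for vi, di in zip(v, d_hat)]

# After full ablation, the vector has no component along d.
v = [3.0, 4.0, 0.0]
d = [1.0, 0.0, 0.0]
v_ablated = ablate_direction(v, d)
print(v_ablated)  # -> [0.0, 4.0, 0.0]
```

Heretic tunes how strongly this removal is applied at each layer (the weight values and position/distance parameters above), which is why refusals drop sharply while the KL divergence from the original stays small.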


Format: Safetensors
Model size: 0.8B params
Tensor type: F16

Model tree for hereticness/Heretic-Qwen3-Zero-Coder-Reasoning-V2-0.8B
Finetuned (2) β†’ this model
Quantizations: 2 models