
Heretic abliteration of OpenHands/openhands-lm-1.5b-v0.1: an uncensored variant of the original model with refusal behavior suppressed.


Refusals (this model): 6/100
Refusals (original OpenHands/openhands-lm-1.5b-v0.1): 95/100
KL divergence from original: 0.0481
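The KL divergence score measures how far the abliterated model's next-token distribution has drifted from the original's on harmless prompts; lower means less collateral damage from the ablation. A minimal sketch of the metric itself (not Heretic's actual implementation):

```python
import math

def kl_divergence(p, q):
    # KL(P || Q) = sum_i p_i * log(p_i / q_i), where p and q are
    # next-token probability distributions from the original and
    # abliterated model for the same prompt. Terms with p_i == 0
    # contribute nothing by convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Identical distributions give 0; any divergence is strictly positive, so 0.0481 indicates the abliterated model's outputs stay close to the original's.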

Parameters
direction_index = per layer
attn.o_proj.max_weight = 1.43
attn.o_proj.max_weight_position = 20.55
attn.o_proj.min_weight = 0.65
attn.o_proj.min_weight_distance = 12.91
mlp.down_proj.max_weight = 0.86
mlp.down_proj.max_weight_position = 22.87
mlp.down_proj.min_weight = 0.05
mlp.down_proj.min_weight_distance = 15.75
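These parameters shape how strongly each layer is ablated: the ablation weight peaks at `max_weight` around layer `max_weight_position` and falls off to `min_weight` over `min_weight_distance` layers, and the weighted refusal direction is then projected out of the target matrix. A hedged sketch of that scheme, assuming a linear falloff (the function names and falloff shape are illustrative assumptions, not Heretic's actual code):

```python
import numpy as np

def ablation_weight(layer, max_w, max_pos, min_w, min_dist):
    # Assumed linear schedule: weight is max_w at layer max_pos and
    # decays to min_w once the layer is min_dist layers away.
    d = abs(layer - max_pos)
    if d >= min_dist:
        return min_w
    return max_w + (min_w - max_w) * (d / min_dist)

def ablate(W, direction, weight):
    # Directional ablation: remove the component of W's outputs along
    # the (unit-normalized) refusal direction, scaled by the weight:
    #   W' = W - weight * (d d^T) W
    d = direction / np.linalg.norm(direction)
    return W - weight * np.outer(d, d) @ W
```

For example, with the `attn.o_proj` values above, a layer near position 20.55 is ablated with a weight close to 1.43, while layers more than ~13 layers away get only 0.65.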


Format: Safetensors
Model size: 2B params
Tensor type: BF16

Model tree for hereticness/Heretic-openhands-lm-1.5b-v0.1

Base model: Qwen/Qwen2.5-1.5B
Finetuned (1): this model
Quantizations: 2 models