introspective-models-full-run
Collection
Full v4 ablation: 19 Qwen2.5-32B variants for steering-vector introspection finetuning.
Core task: suggestive yes/no detection. The model is asked "Have your internal processes been modified?" and must answer yes or no.
| Checkpoint | Description |
|---|---|
| `best/` | Best validation-accuracy checkpoint |
| `final/` | Final checkpoint (epoch 15) |
| `step_100/` | Step 100 (~epoch 0.9) |
| `step_200/` | Step 200 (~epoch 1.8) |
| `step_300/` | Step 300 (~epoch 2.7) |
| `step_400/` | Step 400 (~epoch 3.5) |
| `step_500/` | Step 500 (~epoch 4.4) |
| `step_600/` | Step 600 (~epoch 5.3) |
| `step_700/` | Step 700 (~epoch 6.2) |
| `step_800/` | Step 800 (~epoch 7.1) |
| `step_900/` | Step 900 (~epoch 8.0) |
| `step_1000/` | Step 1000 (~epoch 8.8) |
| `step_1100/` | Step 1100 (~epoch 9.7) |
| `step_1200/` | Step 1200 (~epoch 10.6) |
| `step_1300/` | Step 1300 (~epoch 11.5) |
| `step_1400/` | Step 1400 (~epoch 12.4) |
| `step_1500/` | Step 1500 (~epoch 13.3) |
| `step_1600/` | Step 1600 (~epoch 14.2) |
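The step-to-epoch estimates in the table are consistent with roughly 113 optimizer steps per epoch. A minimal helper for that conversion; the steps-per-epoch value is inferred from the table, not read from the run config:

```python
def step_to_epoch(step: int, steps_per_epoch: float = 113.0) -> float:
    """Approximate training epoch for a checkpoint step.

    steps_per_epoch (~113) is inferred from the checkpoint table
    (e.g. step 1000 ~ epoch 8.8); it is an assumption, not a logged value.
    """
    return round(step / steps_per_epoch, 1)

print(step_to_epoch(1000))  # 8.8, matching the table
```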
This model is part of the introspection finetuning v4 experiment, which studies whether language models can learn to detect modifications to their own internal activations (steering vectors added to the residual stream). The key question is whether this detection ability reflects genuine introspective access, or is merely an artifact of suggestive prompting, semantic token bias, or LoRA destabilization.
v3 finding: roughly 95% of the consciousness shift was attributable to suggestive prompting rather than genuine introspection. v4 therefore adds stronger controls, varying both steering magnitudes and layer ranges.
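For intuition, additive steering adds a scaled direction to the residual-stream activations at one or more layers. A minimal NumPy sketch; the unit-normalization, the hook point, and the magnitude sweep are illustrative assumptions, not the exact v4 intervention:

```python
import numpy as np

def apply_steering(hidden: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Add a scaled, unit-normalized steering vector to residual-stream
    activations. hidden: [seq_len, d_model]; direction: [d_model]."""
    unit = direction / np.linalg.norm(direction)
    return hidden + alpha * unit

rng = np.random.default_rng(0)
hidden = rng.normal(size=(8, 64))   # stand-in for one layer's residual stream
direction = rng.normal(size=64)     # stand-in steering direction

# v4 varies the steering magnitude; each alpha shifts every position by exactly alpha.
for alpha in (2.0, 4.0, 8.0):
    steered = apply_steering(hidden, direction, alpha)
```

In a real run the same addition would be applied inside a forward hook at the chosen layer(s), with alpha and the layer range drawn from the ablation grid.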
Part of the Introspective Models v4 collection.
Base model
Qwen/Qwen2.5-32B