Model Card for OpenMath-Nemotron-1.5B-PruneAware
This model implements Cognitive Compression, an approach that produces hierarchically structured chains of thought that can be actively pruned at inference time while maintaining solution quality. Traditional Chain-of-Thought is append-only: once a token is generated, it remains in context forever. Cognitive Compression instead breaks reasoning into subproblems; once a subproblem is solved, its full chain of thought can be discarded and replaced with a summary and the solution, dramatically reducing context-window pressure.
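The pruning step can be sketched in plain Python. This is an illustrative simulation only: the `SUBPROBLEM_DONE:` marker and the `summarize` helper are hypothetical stand-ins, not the model's actual output format or the project's implementation.

```python
def summarize(reasoning: str, answer: str) -> str:
    """Hypothetical stand-in for a model- or rule-generated one-line summary."""
    return f"[solved: {answer}]"

def prune_context(context: list[str]) -> list[str]:
    """Replace each completed subproblem's full chain of thought with its summary."""
    pruned = []
    for step in context:
        if step.startswith("SUBPROBLEM_DONE:"):
            # A finished step is encoded as "SUBPROBLEM_DONE:<answer>|<full reasoning>"
            answer, reasoning = step.removeprefix("SUBPROBLEM_DONE:").split("|", 1)
            pruned.append(summarize(reasoning, answer))
        else:
            pruned.append(step)
    return pruned

context = [
    "Problem: compute 12*13 + 7",
    "SUBPROBLEM_DONE:156|12*13 = 12*10 + 12*3 = 120 + 36 = 156",
    "Now add 7 to the subproblem result.",
]
print(prune_context(context))
# → ['Problem: compute 12*13 + 7', '[solved: 156]', 'Now add 7 to the subproblem result.']
```

Only the short summary re-enters the context window; the multiplication's intermediate steps are dropped, which is what relieves context pressure on long problems.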
This model is a fine-tuned version of OpenMath-Nemotron-1.5B. It has been trained using TRL.
Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="anujjamwal/OpenMath-Nemotron-1.5B-PruneAware", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
Training procedure
This model was trained with SFT.
Framework versions
- TRL: 0.29.0
- Transformers: 5.0.0
- PyTorch: 2.10.0+cu128
- Datasets: 4.0.0
- Tokenizers: 0.22.2
Citations
Cite this work as:
```bibtex
@misc{jamwal2026cognitivecompression,
    title  = {{Cognitive Compression: Hierarchical Chain of Thought for Efficient LLM Reasoning}},
    author = {Jamwal, Anuj},
    url    = {huggingface.co/anujjamwal/OpenMath-Nemotron-1.5B-PruneAware},
    year   = {2026},
    note   = {CS224N Winter '26 Final Project, Stanford University}
}
```