Instructions to use prithivMLmods/Pyxidis-Manim-CodeGen-1.7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use prithivMLmods/Pyxidis-Manim-CodeGen-1.7B with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="prithivMLmods/Pyxidis-Manim-CodeGen-1.7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("prithivMLmods/Pyxidis-Manim-CodeGen-1.7B")
model = AutoModelForCausalLM.from_pretrained("prithivMLmods/Pyxidis-Manim-CodeGen-1.7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/Pyxidis-Manim-CodeGen-1.7B with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
- SGLang
How to use prithivMLmods/Pyxidis-Manim-CodeGen-1.7B with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

- Docker Model Runner
How to use prithivMLmods/Pyxidis-Manim-CodeGen-1.7B with Docker Model Runner:
```shell
docker model run hf.co/prithivMLmods/Pyxidis-Manim-CodeGen-1.7B
```
Pyxidis-Manim-CodeGen-1.7B (Experimental)
Pyxidis-Manim-CodeGen-1.7B is an experimental math animation coding model fine-tuned on Qwen/Qwen3-1.7B using Manim-CodeGen code traces. It is specialized for Python-based mathematical animations with Manim, making it ideal for educators, researchers, and developers working on math visualization and animation pipelines.
GGUF: https://huggingface.co/prithivMLmods/Pyxidis-Manim-CodeGen-1.7B-GGUF
Key Features
- Manim-Specific Code Generation: Trained on Manim-CodeGen traces, optimized for Python-based animation scripting of mathematical concepts and visual proofs.
- Math + Code Synergy: Generates step-by-step math derivations with corresponding animation code, bridging symbolic reasoning with visualization.
- Animation Workflow Optimization: Provides structured code for scenes, transformations, graphs, and equations in Manim, reducing boilerplate and debugging effort.
- Python-Centric Reasoning: Produces clean, modular, and reusable Python code, supporting educational and research-driven animation pipelines.
- Structured Output Mastery: Outputs Python, Markdown, and LaTeX, making it well suited to tutorials, educational notebooks, and automated video-generation workflows.
- Lightweight but Specialized: Focused on Manim coding efficiency while maintaining a deployable footprint for GPU clusters and research labs.
Quickstart with Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Pyxidis-Manim-CodeGen-1.7B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Write a Manim script to animate the Pythagorean theorem using squares on the triangle's sides."

messages = [
    {"role": "system", "content": "You are a Python coding assistant specialized in Manim-based math animations."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated text is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
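Chat models typically wrap generated scripts in a fenced Markdown code block. A small helper to pull the script out of the decoded response before saving it to a `.py` file; this is a sketch assuming that output format, and `extract_python_code` is a hypothetical name, not part of any library:

```python
import re

def extract_python_code(response: str) -> str:
    """Return the first fenced Python code block in `response`,
    or the whole response stripped if no fence is found."""
    match = re.search(r"```(?:python)?\n(.*?)```", response, re.DOTALL)
    return match.group(1).strip() if match else response.strip()

sample = (
    "Here is the scene:\n"
    "```python\n"
    "from manim import *\n\n"
    "class Demo(Scene):\n"
    "    pass\n"
    "```\n"
    "Render with `manim -pql demo.py Demo`."
)
print(extract_python_code(sample))
```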
Intended Use
- Manim-based math animation coding for research, teaching, and content creation
- Educational visualization assistant to convert math problems into animations
- Python tutoring tool for math-heavy animation workflows
- Prototype generator for interactive STEM video content
Limitations
- Experimental model – may generate code requiring manual debugging
- Limited to Manim coding workflows; not a general-purpose code assistant
- May not handle complex multi-scene projects without iterative refinement
- Prioritizes structured math + animation reasoning, less optimized for general dialogue
