How to use MegaScience/Qwen2.5-3B-MegaScience with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="MegaScience/Qwen2.5-3B-MegaScience")
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MegaScience/Qwen2.5-3B-MegaScience")
model = AutoModelForCausalLM.from_pretrained("MegaScience/Qwen2.5-3B-MegaScience")
messages = [
{"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
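Generation above is greedy by default. As a minimal sketch, sampling can be enabled through standard generate arguments, reusing the model, tokenizer, and inputs from the snippet above (the temperature and top_p values are illustrative assumptions, not settings published by the MegaScience authors):
# Reuses `model`, `tokenizer`, and `inputs` from the snippet above.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.7,  # illustrative value, not an official recommendation
    top_p=0.9,        # illustrative value, not an official recommendation
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))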
How to use MegaScience/Qwen2.5-3B-MegaScience with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "MegaScience/Qwen2.5-3B-MegaScience"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "MegaScience/Qwen2.5-3B-MegaScience",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
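The vLLM server can also be called from Python. A minimal sketch using the openai client package (pip install openai); the api_key value is a placeholder assumption, since vLLM ignores it unless the server was started with authentication:
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # placeholder key

response = client.chat.completions.create(
    model="MegaScience/Qwen2.5-3B-MegaScience",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)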
How to use MegaScience/Qwen2.5-3B-MegaScience with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "MegaScience/Qwen2.5-3B-MegaScience" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "MegaScience/Qwen2.5-3B-MegaScience",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

# Alternatively, run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "MegaScience/Qwen2.5-3B-MegaScience" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "MegaScience/Qwen2.5-3B-MegaScience",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
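Since SGLang exposes the same OpenAI-compatible API, responses can also be streamed from Python. A minimal sketch with the openai client package; the api_key value is a placeholder assumption for a server running without authentication:
from openai import OpenAI

# Point the client at the local SGLang server started above.
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # placeholder key

stream = client.chat.completions.create(
    model="MegaScience/Qwen2.5-3B-MegaScience",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:  # delta.content can be None in some chunks
        print(delta, end="", flush=True)
print()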
How to use MegaScience/Qwen2.5-3B-MegaScience with Docker Model Runner:
docker model run hf.co/MegaScience/Qwen2.5-3B-MegaScience
This repository contains the Qwen2.5-3B-MegaScience model, one of the models trained as part of the MegaScience project.
For the official code, data processing pipeline, and evaluation system, please refer to the MegaScience GitHub repository.
You can use this model with the Hugging Face transformers library:
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "MegaScience/Qwen2.5-3B-MegaScience"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
# Example text generation
prompt = "The capital of France is"
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt")
generated_ids = model.generate(**model_inputs, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
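By default the snippet above loads the model on CPU in full precision. As a minimal sketch for GPU inference, assuming a CUDA device and the accelerate package are installed (device_map and torch_dtype are standard transformers arguments, not model-specific requirements):
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MegaScience/Qwen2.5-3B-MegaScience"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" places weights on available GPUs (requires accelerate);
# torch_dtype="auto" keeps the checkpoint's native precision.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "The capital of France is"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Inputs must be moved to the model's device before generation.
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=20)
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])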
Check out our paper for more details. If you use our dataset or find our work useful, please cite:
@article{fan2025megascience,
title={MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning},
author={Fan, Run-Ze and Wang, Zengzhi and Liu, Pengfei},
year={2025},
journal={arXiv preprint arXiv:2507.16812},
url={https://arxiv.org/abs/2507.16812}
}