<Model / Adapter Name>

One-paragraph summary of what this repo contains (LLM, LoRA adapter, diffusion model, etc.), what it does, and what makes it different.

Highlights

  • <Key capability 1>
  • <Key capability 2>
  • <Key constraint or requirement (e.g., base model needed, GPU recommended)>

Quickstart

Replace placeholders, then copy/paste the relevant section for your artifact type. The examples assume the usual Hugging Face packages are installed: transformers, peft, and accelerate for Options A/B, diffusers for Option C.

Option A — Use as a LoRA / PEFT Adapter (Transformers + PEFT)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

ADAPTER_ID = "<YOUR_ORG/YOUR_ADAPTER_REPO>"      # this repo
peft_cfg = PeftConfig.from_pretrained(ADAPTER_ID)
BASE_ID = peft_cfg.base_model_name_or_path

tokenizer = AutoTokenizer.from_pretrained(BASE_ID, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.float16,
    device_map="auto",
)

model = PeftModel.from_pretrained(model, ADAPTER_ID)
model.eval()

prompt = "Write a short Indonesian summary about LoRA adapters:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

print(tokenizer.decode(out[0], skip_special_tokens=True))

Optional: Merge adapter into the base model (produce a standalone merged model)

# WARNING: Merging changes the weights; verify license compatibility of the base model.
merged = model.merge_and_unload()
merged.save_pretrained("./merged_model", safe_serialization=True)
tokenizer.save_pretrained("./merged_model")

Option B — Use as a Full LLM (Transformers)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "<YOUR_ORG/YOUR_MODEL_REPO>"  # this repo
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, use_fast=True)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,
    device_map="auto",
).eval()

prompt = "Explain Mixture of Experts in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)

print(tokenizer.decode(out[0], skip_special_tokens=True))

Option C — Use as a Diffusion Model / LoRA (Diffusers)

import torch
from diffusers import DiffusionPipeline

BASE_ID = "<ORG/BASE_DIFFUSION_MODEL>"           # e.g., stabilityai/stable-diffusion-xl-base-1.0
REPO_ID = "<YOUR_ORG/YOUR_REPO>"                 # this repo (full model or LoRA)
dtype = torch.float16

pipe = DiffusionPipeline.from_pretrained(BASE_ID, torch_dtype=dtype).to("cuda")

# If this repo is a LoRA:
# - Upload your weights (often *.safetensors) to this repo
# - Then load them like this:
pipe.load_lora_weights(REPO_ID)                  # optionally: weight_name="my_lora.safetensors"
# Some pipelines support:
# pipe.fuse_lora()

image = pipe("a cinematic photo of a rainy Jakarta street at night", num_inference_steps=30).images[0]
image.save("sample.png")

What’s in this repository?

Describe what you uploaded and where:

  • README.md (this file)
  • For PEFT adapters: adapter_config.json, adapter_model.safetensors (or .bin)
  • For full LLMs: config.json, model weights (e.g., model.safetensors), tokenizer files
  • For diffusion: model weights and scheduler/config files, plus example images (optional)

Model details

Model type

  • Artifact: <LoRA adapter | merged model | full model | diffusion model | diffusion LoRA>
  • Architecture: <e.g., Transformer decoder-only | U-Net | DiT | VAE>
  • Base model: <ORG/MODEL> (if applicable)
  • Languages: <en, id, ...>
  • License: <...>
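
If the artifact is a LoRA adapter, it can help readers to state what that actually means: two low-rank matrices A (r×k) and B (d×r) whose scaled product is added to a frozen base weight, W_eff = W + (alpha/r)·B·A. A stdlib-only sketch with tiny hypothetical matrices (real adapters apply this to the base model's attention/MLP projections):

```python
# Tiny LoRA update: W_eff = W + (alpha / r) * (B @ A), stdlib only.
def matmul(X, Y):
    """Multiply two matrices given as nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_effective_weight(W, A, B, alpha):
    """Add the scaled low-rank update B @ A to the frozen base weight W."""
    r = len(A)                      # LoRA rank = number of rows of A
    scale = alpha / r
    BA = matmul(B, A)               # (d x r) @ (r x k) -> (d x k)
    return [[w + scale * u for w, u in zip(wr, ur)] for wr, ur in zip(W, BA)]

# Hypothetical 2x2 base weight, rank-1 adapter, alpha = 2.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]                    # r=1, k=2
B = [[0.5], [0.25]]                 # d=2, r=1
print(lora_effective_weight(W, A, B, alpha=2.0))   # -> [[2.0, 1.0], [0.5, 1.5]]
```

Because only A and B are trained (and r is small), the adapter is a tiny fraction of the base model's size, which is why Option A downloads the base model separately.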

Intended use

Primary use cases

  • <use case 1>
  • <use case 2>

Users & contexts

  • <intended users and deployment contexts>
  • <recommended guardrails (if any)>

Out-of-scope use

  • <Clearly list disallowed / not recommended uses>

Training

Training data

  • Source(s): <datasets / corpora / synthetic data details>
  • Data filtering / processing: <dedup, profanity filtering, caption cleaning, etc.>
  • Known dataset limitations: <coverage gaps, language imbalance, etc.>

Training procedure

  • Objective: <SFT | DPO | fine-tune | LoRA fine-tune | DreamBooth | etc.>
  • Key hyperparameters:
    • epochs: <...>
    • batch size: <...>
    • learning rate: <...>
    • max sequence length / image size: <...>
    • LoRA config (if applicable): r=<...>, alpha=<...>, dropout=<...>, target_modules=<...>
  • Compute:
    • GPUs: <type/count>
    • mixed precision: <fp16/bf16>
    • training time: <...>
  • Reproducibility:
    • seed(s): <...>
    • training code: <...>
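
The LoRA hyperparameters above are the same fields PEFT records in adapter_config.json, so the two should agree. A minimal illustration of that mapping (all values are placeholders, not this repo's actual settings):

```python
import json

# Hypothetical LoRA settings mirroring fields PEFT writes to adapter_config.json.
lora_hparams = {
    "r": 16,                                  # LoRA rank
    "lora_alpha": 32,                         # effective scale = lora_alpha / r
    "lora_dropout": 0.05,
    "target_modules": ["q_proj", "v_proj"],   # which base-model layers get adapters
    "peft_type": "LORA",
    "base_model_name_or_path": "<ORG/MODEL>",
}

print(json.dumps(lora_hparams, indent=2))
```

Readers can cross-check these values against the adapter_config.json shipped in the repo.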

Evaluation

Provide at least one of the following:

  • Automatic metrics (BLEU/ROUGE/BERTScore, perplexity, FID/CLIP score, etc.)
  • Human evaluation (protocol + summary)
  • Task-based qualitative examples
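
If you report perplexity, note that it is simply the exponentiated mean per-token negative log-likelihood. A stdlib sketch with made-up loss values (substitute the real per-token losses from your eval run):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood), natural log."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Hypothetical per-token NLLs from an eval run.
losses = [2.1, 1.8, 2.4, 2.0]
print(round(perplexity(losses), 2))
```

Lower is better; a perplexity of 1.0 would mean the model assigns probability 1 to every reference token.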

Results

| Task  | Dataset | Metric | Score |
|-------|---------|--------|-------|
| <...> | <...>   | <...>  | <...> |

Example outputs

Add a few short examples:

  • Prompt: “...”
  • Output: “...”

How to cite

If you used or built on prior work, add citations.

@misc{your_model_2025,
  title        = {<Model Name>},
  author       = {<Author/Org>},
  year         = {2025},
  howpublished = {\url{<REPO_URL>}},
}
Model tree

  • This repo: lumicero/Qwen2.5-bilingual-xlora (LoRA adapter)
  • Base model: Qwen/Qwen2.5-0.5B