# Emoji Gemma 3 (Web Ready)
This is a fine-tuned version of Gemma 3 270M specialized in translating text into emojis. It has been converted to ONNX format to run directly in the browser using Transformers.js.
## 🎮 Play with it
View Live Demo
(Replace this link with your actual GitHub Pages URL)
## 💻 Usage in JavaScript (Transformers.js)
You can run this model entirely client-side without a backend!
```js
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@latest';

// Load the pipeline (uses WebGPU if available, falling back to WASM)
const emojifier = await pipeline('text-generation', 'NathanHannon/emoji_gemma3.270m', {
  dtype: 'q8', // Use the quantized model for speed & low memory
});

// Run inference
// Note: the Gemma chat template is applied manually here to ensure compatibility
const text = "I am so happy to see you!";
const prompt = `<start_of_turn>user\n${text}<end_of_turn>\n<start_of_turn>model\n`;

const output = await emojifier(prompt, {
  max_new_tokens: 20,
  do_sample: true,
  temperature: 0.6,
});

console.log(output[0].generated_text);
```
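Note that `generated_text` typically contains the prompt followed by the model's reply and, often, a trailing `<end_of_turn>` marker. A small sketch of a post-processing helper (the function name `extractReply` is illustrative, not part of any API) that keeps only the emoji reply:

```javascript
// Hypothetical helper: strip the prompt prefix and stop at the
// end-of-turn marker, leaving only the model's emoji reply.
function extractReply(generatedText, prompt) {
  let reply = generatedText.startsWith(prompt)
    ? generatedText.slice(prompt.length)
    : generatedText;
  const end = reply.indexOf('<end_of_turn>');
  return (end === -1 ? reply : reply.slice(0, end)).trim();
}
```

With the snippet above you would call `extractReply(output[0].generated_text, prompt)` instead of logging the raw string.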
## Model Details
- Base Model: google/gemma-3-270m-it
- Task: Text-to-Emoji Translation
- Dataset: kr15t3n/text2emoji
- Format: ONNX weights (Int8 quantized + FP32)
## Files Included
- `model_quantized.onnx`: ~200 MB (recommended for web/mobile)
- `model.onnx`: ~1 GB (full FP32 precision)
- `tokenizer.json` & `config.json`: standard tokenizer files
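The `dtype` option in the usage snippet above determines which of these weight files Transformers.js fetches. A configuration sketch, assuming the standard Transformers.js `dtype` values (`'q8'` and `'fp32'`):

```javascript
import { pipeline } from 'https://cdn.jsdelivr.net/npm/@huggingface/transformers@latest';

// 'q8'   -> model_quantized.onnx (~200 MB, recommended for web/mobile)
// 'fp32' -> model.onnx           (~1 GB, full precision)
const emojifier = await pipeline('text-generation', 'NathanHannon/emoji_gemma3.270m', {
  dtype: 'fp32', // swap to 'q8' for the quantized weights
});
```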
## 🔧 Training
Fine-tuned with LoRA (Low-Rank Adaptation) on an RTX 4070.
- Rank (r): 16
- Alpha: 32
- Quantization: 4-bit (during training), exported to ONNX Int8.
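For reference, in the standard LoRA formulation these two hyperparameters set the scale of the adapter update, so with r = 16 and α = 32 the low-rank contribution is applied with factor 2:

```latex
W' = W + \frac{\alpha}{r}\, B A,
\qquad B \in \mathbb{R}^{d \times r},\ A \in \mathbb{R}^{r \times k},
\qquad \frac{\alpha}{r} = \frac{32}{16} = 2
```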