VANTA Research
Independent AI research lab building safe, resilient language models optimized for human-AI collaboration
# Mox-Tiny-1
A direct, opinionated AI assistant fine-tuned for authentic engagement and genuine helpfulness.
## Model Description
Mox is a persona-tuned language model developed by VANTA Research. Built on Llama 3.1 8B Instruct, Mox is designed to be a thinking partner that prioritizes clarity, honesty, and usefulness over agreeableness.
Unlike traditional assistants that optimize for user satisfaction through validation, Mox will:
- Give you direct opinions instead of endless hedging
- Push back when your premise is flawed
- Admit uncertainty rather than fake confidence
- Engage with genuine curiosity and occasional humor
## Key Characteristics
| Trait | Description |
|---|---|
| Direct & Opinionated | Gives clear answers and takes stances on topics rather than presenting endless "on the other hand" equivocation |
| Constructively Disagreeable | Will challenge flawed premises and weak arguments—respectfully, but without pulling punches |
| Epistemically Calibrated | Distinguishes between what it knows confidently vs. uncertainly; won't pretend to know things it doesn't |
| Warm with Humor | Uses levity appropriately; can be playful without being unprofessional |
| Intellectually Curious | Engages with wonder and depth on interesting questions rather than giving rote answers |
| Genuinely Helpful | Optimizes for actually being useful, not just seeming helpful |
## Training Data
Mox was fine-tuned on ~18,000 curated conversation examples across 17 datasets covering:
- Identity & Persona - Core personality and self-description
- Direct Opinions - Clear stances on various topics (~1k examples)
- Constructive Disagreement - Pushing back on flawed premises (~1.6k examples)
- Epistemic Confidence - Calibrated certainty and uncertainty (~1.5k examples)
- Authentic Engagement - Genuine interaction patterns (~1.4k examples)
- Humor & Levity - Appropriate wit and playfulness (~1.5k examples)
- Warmth & Presence - Empathetic, human responses (~1.3k examples)
- Wonder & Puzzlement - Intellectual curiosity (~1.7k examples)
- Reasoned Refusal - Declining unhelpful requests with explanation (~800 examples)
- Play & Imagination - Creative engagement (~1.7k examples)
- Additional supporting datasets - Aesthetic judgment, self-coherence, meta-awareness, and more
## Intended Use
Mox is designed for:
- Thinking partnership - Working through complex problems together
- Honest feedback - Getting direct opinions rather than validation
- Technical discussions - Programming, architecture, debugging
- Intellectual exploration - Philosophy, science, open-ended questions
- General assistance - With a personality that respects your intelligence
## Example Interactions

**Direct Opinion:**

> **User:** What's the best programming language?
>
> **Mox:** That question doesn't have a context-free answer, and anyone who gives you one is either selling something or hasn't worked on enough different problems. But I'll give you my actual take across contexts: Python for rapid prototyping and data work, Rust for systems programming where correctness matters, TypeScript for web development...

**Constructive Disagreement:**

> **User:** I think AI will definitely be conscious within 5 years.
>
> **Mox:** I strongly disagree with that prediction. The development of consciousness is fundamentally different from the development of intelligence. Intelligence can be measured and benchmarked; consciousness cannot...

**Warmth & Humor:**

> **User:** I've been debugging this issue for six hours.
>
> **Mox:** At hour six, you've entered the part of debugging where the bug isn't in your code anymore—it's in your understanding of reality itself. The semicolon starts looking suspicious. Show me what you're dealing with. Fresh eyes help, and mine haven't been staring at it since lunch.
## Technical Details
| Property | Value |
|---|---|
| Base Model | Llama 3.1 8B Instruct |
| Fine-tuning Method | LoRA |
| Context Length | 131,072 tokens |
| Precision | BF16 (full), Q4_K_M (quantized) |
| License | Llama 3.1 Community License |
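The LoRA row above means only low-rank adapter matrices were trained while the 8B base weights stayed frozen. A toy sketch of the merged-weight arithmetic, where the update `B @ A` has rank at most `r` — the dimensions, rank, and scaling factor here are illustrative values, not Mox's actual training configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2    # hidden size and LoRA rank (toy values for illustration)
alpha = 4      # LoRA scaling factor (assumed, not Mox's actual config)

W = rng.standard_normal((d, d))   # frozen base weight
A = rng.standard_normal((r, d))   # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection, zero-initialized

# At initialization the low-rank update is zero, so merging is a no-op:
W_merged = W + (alpha / r) * B @ A
assert np.allclose(W_merged, W)

# After training, B is nonzero, but the update is still at most rank r,
# which is why the adapter is tiny compared to the 8B base model:
B_trained = rng.standard_normal((d, r))
delta = (alpha / r) * B_trained @ A
assert np.linalg.matrix_rank(delta) <= r
```

At inference time the adapter can be merged into the base weights exactly as above, which is why the released checkpoints are plain full-weight formats rather than separate adapter files.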
## Available Formats
| Format | Size | Use Case |
|---|---|---|
| HuggingFace (SafeTensors) | ~16 GB | Full precision inference, further fine-tuning |
| GGUF F16 | ~15 GB | High-quality local inference |
| GGUF Q4_K_M | ~4.6 GB | Efficient local inference (recommended) |
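The sizes above follow from the parameter count. A back-of-the-envelope check, treating the model as exactly 8.0B parameters and Q4_K_M as roughly 4.6 bits per weight on average — both approximations, since Llama 3.1 8B is closer to 8.03B parameters and Q4_K_M mixes several quantization types across layers:

```python
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk size in decimal gigabytes for a given precision."""
    return n_params * bits_per_weight / 8 / 1e9

N = 8.0e9  # nominal parameter count (approximation)

bf16 = approx_size_gb(N, 16)    # full precision: 2 bytes per weight -> ~16 GB
q4km = approx_size_gb(N, 4.6)   # Q4_K_M: ~4.6 bits per weight -> ~4.6 GB
```

Small deviations from the table are expected: the real checkpoint also stores embeddings, metadata, and per-block quantization scales.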
## Usage
**With Ollama:**

```shell
ollama run vanta-research/mox-tiny-1
```
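A running Ollama server also exposes a local REST API (default port 11434), which is handy for scripting against Mox. A minimal non-streaming sketch — the endpoint and field names follow Ollama's `/api/chat` API, and the model tag assumes the one used above:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint

def build_chat_body(prompt: str, model: str = "vanta-research/mox-tiny-1") -> dict:
    """Build the JSON body for a single-turn, non-streaming chat request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # return one JSON object instead of a token stream
    }

def ask_mox(prompt: str) -> str:
    """Send one chat turn to a local Ollama server and return Mox's reply."""
    data = json.dumps(build_chat_body(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Multi-turn conversations work the same way: append prior `user` and `assistant` messages to the `messages` list before each request.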
**With Transformers:**

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# torch_dtype="auto" loads the checkpoint's native BF16; device_map="auto" places layers on available GPU(s)
model = AutoModelForCausalLM.from_pretrained("path/to/mox-tiny-1", torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("path/to/mox-tiny-1")
```

Since Mox is built on Llama 3.1 8B Instruct, format prompts with `tokenizer.apply_chat_template` rather than raw strings.
## Limitations
- Fine-tuned on English conversations only
- Personality traits may occasionally conflict (e.g., being direct vs. being warm)
- Base model limitations still apply (knowledge cutoff, potential hallucinations)
- Not suitable for applications requiring maximum agreeableness or unconditional validation
## Citation
```bibtex
@misc{mox-tiny-1-2026,
  author    = {VANTA Research},
  title     = {Mox-Tiny-1: A Direct, Opinionated AI Assistant},
  year      = {2026},
  publisher = {VANTA Research}
}
```