Whisper Large-v3 Fine-tuned for Thur

This model is a fine-tuned version of openai/whisper-large-v3 on the Mozilla Common Voice Spontaneous Speech dataset for Thur (lth).

Training

  • Base model: openai/whisper-large-v3
  • Fine-tuning method: full fine-tuning with the standard seq2seq cross-entropy objective (see the sketch after this list)
  • Whisper language token: english (Whisper has no dedicated language token for Thur, so the English token is reused)
  • Dataset: Mozilla Common Voice Spontaneous Speech
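
A minimal sketch of how such a setup is usually wired together with the Hugging Face transformers API. The column names ("audio", "sentence") and checkpoint handling are assumptions for illustration, not taken from the original training script:

from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Processor configured with the English language token, matching the card above
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-large-v3", language="english", task="transcribe"
)
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")

def prepare_example(batch):
    # Log-Mel input features from 16 kHz audio for the encoder
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Target token ids consumed by the seq2seq cross-entropy loss
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

These prepared examples would then be passed to a standard seq2seq training loop (e.g. Seq2SeqTrainer) for full fine-tuning.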

Usage

from transformers import WhisperForConditionalGeneration, WhisperProcessor
import torch

processor = WhisperProcessor.from_pretrained("vitthalbhandari/whisper-large-v3-aft-all-lth")
model = WhisperForConditionalGeneration.from_pretrained("vitthalbhandari/whisper-large-v3-aft-all-lth")

# audio_array: 1-D float array of mono audio sampled at 16 kHz
inputs = processor(audio_array, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(**inputs)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(transcription)
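
One simple way to obtain audio_array is to load a local recording with librosa, resampled to 16 kHz mono (the file name below is a placeholder):

import librosa

# Load and resample a local file to 16 kHz mono float32
audio_array, _ = librosa.load("example.wav", sr=16000, mono=True)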