# mlx-community/LTX-2-dev-bf16

This model was converted to MLX format from [`Lightricks/LTX-2`](https://huggingface.co/Lightricks/LTX-2) using mlx-video version 0.0.1.

Refer to the [original model card](https://huggingface.co/Lightricks/LTX-2) for more details on the model.

## Use with mlx-video

### Installation

Install from source:

**Option 1:** Install with pip (requires git):

```shell
pip install git+https://github.com/Blaizzy/mlx-video.git
```

**Option 2:** Install with [uv](https://github.com/astral-sh/uv):

```shell
uv pip install git+https://github.com/Blaizzy/mlx-video.git
```

## Quick Start

### Text-to-Video (T2V) Generation

```shell
python -m mlx_video.generate_dev \
    --prompt "Two dogs of the poodle breed wearing sunglasses, close up, cinematic, sunset" \
    --num-frames 100 \
    --width 768 \
    --model-repo mlx-community/LTX-2-dev-bf16
```

### Image-to-Video (I2V) Conditioning

Condition video generation on an input image:

```shell
# First-frame conditioning
python -m mlx_video.generate_dev \
    --prompt "A cat walking across a sunny garden" \
    --image cat.jpg \
    --image-strength 1.0 \
    --image-frame-idx 0 \
    --model-repo mlx-community/LTX-2-dev-bf16

# Middle-frame conditioning
python -m mlx_video.generate_dev \
    --prompt "A person turning around" \
    --image person.jpg \
    --image-frame-idx 16 \
    --num-frames 33 \
    --model-repo mlx-community/LTX-2-dev-bf16
```

### Audio-Video Generation

Generate synchronized video and audio:

```shell
python -m mlx_video.generate_dev \
    --prompt "Ocean waves crashing on rocks, seagulls calling" \
    --height 512 \
    --width 512 \
    --num-frames 65 \
    --output-path output_av.mp4 \
    --output-audio output.wav \
    --model-repo mlx-community/LTX-2-dev-bf16
```

## Python API

### Basic Video Generation

```python
from mlx_video.generate_dev import generate_video_dev

# Generate a video
generate_video_dev(
    model_repo="mlx-community/LTX-2-dev-bf16",
    prompt="A beautiful sunset over the ocean",
    height=512,
    width=768,
    num_frames=65,
    seed=42,
    fps=24,
    output_path="output.mp4",
)
```
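When choosing `num_frames` and `fps`, note that the clip length is simply their ratio. A quick sanity check for the parameters used above:

```python
# Clip duration = frame count / frame rate.
# With the settings above: 65 frames at 24 fps.
num_frames = 65
fps = 24
duration_s = num_frames / fps
print(f"{duration_s:.2f} s")  # about 2.71 seconds of video
```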