Txt2Img-MHN-VQGAN

A VQGAN autoencoder for remote sensing images, used as the image tokenizer in the Txt2Img-MHN text-to-image framework.

Model Details

  • Class: VQModel (diffusers)
  • Input/Output: 3-channel RGB images (256×256)
  • Latent channels: 256
  • Codebook: 1024 embeddings (256-dim)
  • Parameters: ~72M
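
With a 1024-entry codebook and 8× spatial downsampling, each 256×256 image maps to a 32×32 grid of discrete token indices. The sketch below recovers those indices; it assumes diffusers' VectorQuantizer return convention of (quantized, loss, (perplexity, encodings, indices)), so verify against your installed version.

from diffusers import VQModel
import torch

vae = VQModel.from_pretrained(
    "BiliSakura/Txt2Img-MHN-VQGAN",
    ignore_mismatched_sizes=True
)

with torch.no_grad():
    latents = vae.encode(torch.randn(1, 3, 256, 256)).latents  # (1, 256, 32, 32)
    # quantize snaps each 256-dim latent vector to its nearest codebook entry
    _, _, (_, _, indices) = vae.quantize(latents)

tokens = indices.view(1, 32, 32)  # one index in [0, 1023] per spatial position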

Usage

from diffusers import VQModel
import torch

# Load model
vae = VQModel.from_pretrained(
    "BiliSakura/Txt2Img-MHN-VQGAN",
    ignore_mismatched_sizes=True
)

# Encode an image to the continuous latent representation
image = torch.randn(1, 3, 256, 256)  # dummy tensor standing in for a real image
with torch.no_grad():
    latent = vae.encode(image).latents  # (1, 256, 32, 32)

# Decode latent to image
with torch.no_grad():
    decoded = vae.decode(latent).sample  # (1, 3, 256, 256)
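
Real inputs need preprocessing: diffusers VAE-style models conventionally take tensors scaled to [-1, 1]. Below is a round-trip sketch under that assumption, reusing the vae loaded above (the file name scene.jpg is a placeholder):

from PIL import Image
import numpy as np
import torch

img = Image.open("scene.jpg").convert("RGB").resize((256, 256))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0  # scale to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)  # (1, 3, 256, 256)

with torch.no_grad():
    recon = vae.decode(vae.encode(x).latents).sample

# Map the reconstruction back to 8-bit pixels and save it
recon = ((recon.clamp(-1, 1) + 1) * 127.5).squeeze(0).permute(1, 2, 0)
Image.fromarray(recon.numpy().astype(np.uint8)).save("reconstruction.png")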

Training

Trained on the RSICD remote sensing image captioning dataset using the Taming Transformers codebase.

Citation

@article{txt2img_mhn,
  title={Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks},
  author={Xu, Yonghao and Yu, Weikang and Ghamisi, Pedram and Kopp, Michael and Hochreiter, Sepp},
  journal={IEEE Transactions on Image Processing},
  doi={10.1109/TIP.2023.3323799},
  year={2023}
}

License

MIT License, intended for academic use only.
