Remote Sensing Visual Generative Models
VQGAN model for remote sensing image generation, part of the Txt2Img-MHN framework.
Usage with `VQModel` (diffusers):

```python
from diffusers import VQModel
import torch

# Load model
vae = VQModel.from_pretrained(
    "BiliSakura/Txt2Img-MHN-VQGAN",
    ignore_mismatched_sizes=True,
)

# Encode image to latent
image = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    latent = vae.encode(image).latents  # (1, 256, 32, 32)

# Decode latent to image
with torch.no_grad():
    decoded = vae.decode(latent).sample  # (1, 3, 256, 256)
```
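Inside a VQGAN, the continuous encoder output is discretized by nearest-neighbour lookup in a learned codebook; the resulting integer indices are what a text-to-image stage such as Txt2Img-MHN predicts. The sketch below illustrates that lookup step in plain NumPy with a random codebook and latents whose sizes are chosen for illustration only, not taken from the model's actual weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical codebook: 1024 entries of dimension 256 (illustrative sizes)
codebook = rng.standard_normal((1024, 256))

# Stand-in encoder output: a 32x32 grid of 256-dim latent vectors, flattened
z = rng.standard_normal((32 * 32, 256))

# Squared L2 distance from each latent vector to every codebook entry
d = (z**2).sum(1, keepdims=True) - 2 * z @ codebook.T + (codebook**2).sum(1)

indices = d.argmin(axis=1)  # discrete codes, shape (1024,)
z_q = codebook[indices]     # quantized latents, shape (1024, 256)
```

Decoding then maps `z_q` (reshaped back to the spatial grid) through the decoder to reconstruct an image, as in the `vae.decode` call above.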
Trained on the RSICD remote sensing dataset using the Taming Transformers codebase.
```bibtex
@article{txt2img_mhn,
  title={Txt2Img-MHN: Remote Sensing Image Generation from Text Using Modern Hopfield Networks},
  author={Xu, Yonghao and Yu, Weikang and Ghamisi, Pedram and Kopp, Michael and Hochreiter, Sepp},
  journal={IEEE Trans. Image Process.},
  doi={10.1109/TIP.2023.3323799},
  year={2023}
}
```
MIT License - for academic use only.