Tags: Text-to-Image · Diffusers · TensorBoard · Safetensors · stable-diffusion-xl · stable-diffusion-xl-diffusers · controlnet · diffusers-training
How to use kwanY/EAS4 with Diffusers:
```bash
pip install -U diffusers transformers accelerate
```
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

controlnet = ControlNetModel.from_pretrained("kwanY/EAS4")
# The base checkpoint is an SDXL model, so the SDXL ControlNet pipeline
# class is used here rather than the generic StableDiffusionControlNetPipeline.
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "SG161222/RealVisXL_V3.0", controlnet=controlnet
)
```

controlnet-kwanY/EAS4
These are AlignNet weights trained on SG161222/RealVisXL_V3.0 with Pose, Expression, and Sparse landmark conditions. Some example images are shown below.
prompt: photo of a human
prompt: A render of a head of Disney character
prompt: a FHD head of an Orc

Intended uses & limitations
How to use
# TODO: add an example code snippet for running this diffusion pipeline
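Pending the author's own snippet, here is a hedged sketch of a full generation run. The conditioning-image path (`pose_condition.png`), the fp16 dtype, the step count, and the use of `StableDiffusionXLControlNetPipeline` with the SDXL base are illustrative assumptions, not details stated in this card.

```python
# Hypothetical end-to-end sketch for running this ControlNet with its SDXL base.
# File names and generation settings below are illustrative assumptions.

PROMPT = "photo of a human"             # one of the example prompts above
CONDITION_IMAGE = "pose_condition.png"  # hypothetical pose/landmark map

def run(prompt: str = PROMPT, condition_path: str = CONDITION_IMAGE) -> None:
    # Heavy imports live inside the function so the sketch can be read
    # and imported without torch/diffusers installed.
    import torch
    from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "kwanY/EAS4", torch_dtype=torch.float16
    )
    pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
        "SG161222/RealVisXL_V3.0", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    cond = load_image(condition_path)   # conditioning image (e.g. a pose map)
    image = pipe(prompt, image=cond, num_inference_steps=30).images[0]
    image.save("output.png")

if __name__ == "__main__":
    run()
```

The `__main__` guard keeps the model downloads out of imports, so the constants and `run` can be inspected without a GPU.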
Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
Training details
[TODO: describe the data used to train the model]
Base model: SG161222/RealVisXL_V3.0