Repository file listing (size and latest commit message per file; latest commit: "Update README.md", 0b0bcd1, verified):

- 1.52 kB: initial commit
- 5.99 kB: Update README.md
- 931 MB: Add HuggingFace format full model + text encoder only
- 323 MB: Add HuggingFace format full model + text encoder only
- ViT-L-14-BEST-smooth-GmP-ft-pickle-OpenAI.pt: detected pickle imports (18):
- "torch._utils._rebuild_parameter",
- "clip.model.VisionTransformer",
- "clip.model.LayerNorm",
- "torch._utils._rebuild_tensor_v2",
- "__builtin__.set",
- "torch.nn.modules.activation.MultiheadAttention",
- "torch.nn.modules.linear.Linear",
- "torch.nn.modules.sparse.Embedding",
- "torch.HalfStorage",
- "torch.nn.modules.container.Sequential",
- "collections.OrderedDict",
- "torch.FloatStorage",
- "clip.model.ResidualAttentionBlock",
- "torch.nn.modules.linear.NonDynamicallyQuantizableLinear",
- "clip.model.Transformer",
- "clip.model.CLIP",
- "torch.nn.modules.conv.Conv2d",
- "clip.model.QuickGELU"
- 932 MB: Add new #1 top-performing smooth-GmP model
- 325 MB: ViT-L Text Encoder "same as HF / used in SDXL"
- ViT-L-14-GmP-ft-pickle-OpenAI.pt: detected pickle imports (18), the same 18 imports listed above
- 932 MB: Added .safetensors and full model object pickle
- 1.71 GB: Upload ViT-L-14-GmP-ft-state_dict.pt
- 931 MB: Greatly improved TEXT + Detail (as CLIP-L for Flux.1)
- 323 MB: Greatly improved TEXT + Detail (as CLIP-L for Flux.1)
- ViT-L-14-TEXT-detail-improved-hiT-GmP-pickle-OpenAI.pt: detected pickle imports (18), the same 18 imports listed above
- 932 MB: Greatly improved TEXT + Detail (as CLIP-L for Flux.1)
- 736 Bytes: Finally working: Redundant TEXT model for HF inference
- 525 kB: Sigh :)
- 1.71 GB: Finally working: Redundant TEXT model for HF inference
- 316 Bytes: From openai/clip-vit-large-patch14
- 389 Bytes: From openai/clip-vit-large-patch14
- 2.22 MB: From openai/clip-vit-large-patch14
- 902 Bytes: From openai/clip-vit-large-patch14
- 961 kB: From openai/clip-vit-large-patch14