AI & ML interests

Run open-source LLMs locally, across CPUs and GPUs, in Rust and Wasm, without changing the binary!

LlamaEdge-compatible quants for SmolVLM2 models.
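The quants published by this organization are meant to be served with LlamaEdge on the WasmEdge runtime. A minimal sketch of the workflow, using one of the Gemma repos listed below as an example; the exact GGUF file name and prompt-template value are assumptions and should be checked against the model card:

```shell
# Download a LlamaEdge-compatible quant (file name is an assumption; see the repo's file list).
curl -LO https://huggingface.co/second-state/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-Q5_K_M.gguf

# Download the LlamaEdge API server: a single Wasm binary that runs unchanged on any OS/GPU.
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm

# Serve the model through an OpenAI-compatible API with WasmEdge (requires the wasi-nn GGML plugin).
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:gemma-3-4b-it-Q5_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template gemma-instruct \
  --ctx-size 4096
```

The same `llama-api-server.wasm` binary serves every text-generation quant in the collections below; only the GGUF file and the `--prompt-template` flag change per model family.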
LlamaEdge-compatible quants for Qwen3 models.
LlamaEdge-compatible quants for EXAONE-3.5 models.
LlamaEdge-compatible quants for Gemma-3-it models.
- second-state/gemma-3-27b-it-GGUF (Image-Text-to-Text • 27B • Updated • 256)
- second-state/gemma-3-12b-it-GGUF (Image-Text-to-Text • 12B • Updated • 585 • 2)
- second-state/gemma-3-4b-it-GGUF (Image-Text-to-Text • 4B • Updated • 497)
- second-state/gemma-3-1b-it-GGUF (Text Generation • 1.0B • Updated • 282)
- second-state/stable-diffusion-v1-5-GGUF (Text-to-Image • 1B • Updated • 22.7k • 29)
- second-state/stable-diffusion-v-1-4-GGUF (Text-to-Image • 1B • Updated • 258 • 3)
- second-state/stable-diffusion-3.5-medium-GGUF (Text-to-Image • 0.7B • Updated • 17k • 10)
- second-state/stable-diffusion-3.5-large-GGUF (Text-to-Image • 0.7B • Updated • 21.3k • 12)
LlamaEdge-compatible quants for Qwen2-VL models.
LlamaEdge-compatible quants for tool-use models.
- second-state/Llama-3-Groq-8B-Tool-Use-GGUF (Text Generation • 8B • Updated • 242 • 2)
- second-state/Llama-3-Groq-70B-Tool-Use-GGUF (Text Generation • 71B • Updated • 161 • 2)
- second-state/Hermes-2-Pro-Llama-3-8B-GGUF (Text Generation • 8B • Updated • 1.61k • 2)
- second-state/Nemotron-Mini-4B-Instruct-GGUF (4B • Updated • 154 • 1)
LlamaEdge-compatible quants for Llama 3.2 3B and 1B Instruct models.
LlamaEdge-compatible quants for Yi-1.5 chat models.
- second-state/Yi-1.5-9B-Chat-16K-GGUF (Text Generation • 9B • Updated • 162 • 5)
- second-state/Yi-1.5-34B-Chat-16K-GGUF (Text Generation • 34B • Updated • 145 • 3)
- second-state/Yi-1.5-9B-Chat-GGUF (Text Generation • 9B • Updated • 445 • 8)
- second-state/Yi-1.5-6B-Chat-GGUF (Text Generation • 6B • Updated • 206 • 3)
LlamaEdge-compatible quants for Qwen2.5-VL models.
LlamaEdge-compatible quants for Tessa-T1 models.
LlamaEdge-compatible quants for EXAONE-Deep models.
LlamaEdge-compatible quants for DeepSeek-R1 distilled models.
- second-state/DeepSeek-R1-Distill-Qwen-1.5B-GGUF (Text Generation • 2B • Updated • 269)
- second-state/DeepSeek-R1-Distill-Qwen-7B-GGUF (Text Generation • 8B • Updated • 200 • 1)
- second-state/DeepSeek-R1-Distill-Qwen-14B-GGUF (Text Generation • 15B • Updated • 12.8k • 1)
- second-state/DeepSeek-R1-Distill-Qwen-32B-GGUF (Text Generation • 33B • Updated • 80)
LlamaEdge-compatible quants for Falcon3-Instruct models.
- second-state/Falcon3-10B-Instruct-GGUF (Text Generation • 10B • Updated • 46 • 1)
- second-state/Falcon3-7B-Instruct-GGUF (Text Generation • 7B • Updated • 81 • 2)
- second-state/Falcon3-3B-Instruct-GGUF (Text Generation • 3B • Updated • 39)
- second-state/Falcon3-1B-Instruct-GGUF (Text Generation • 2B • Updated • 56)
LlamaEdge-compatible quants for Qwen2.5-Coder models.
- second-state/Qwen2.5-Coder-0.5B-Instruct-GGUF (Text Generation • 0.5B • Updated • 131 • 1)
- second-state/Qwen2.5-Coder-3B-Instruct-GGUF (Text Generation • 3B • Updated • 220)
- second-state/Qwen2.5-Coder-14B-Instruct-GGUF (Text Generation • 15B • Updated • 118)
- second-state/Qwen2.5-Coder-32B-Instruct-GGUF (Text Generation • 33B • Updated • 177)
LlamaEdge-compatible quants for InternLM-2.5 models.
LlamaEdge-compatible quants for Qwen 2.5 instruct and coder models.
- second-state/Qwen2.5-72B-Instruct-GGUF (Text Generation • 73B • Updated • 235 • 1)
- second-state/Qwen2.5-32B-Instruct-GGUF (Text Generation • 33B • Updated • 97)
- second-state/Qwen2.5-14B-Instruct-GGUF (Text Generation • 15B • Updated • 164)
- second-state/Qwen2.5-7B-Instruct-GGUF (Text Generation • 8B • Updated • 279)
LlamaEdge-compatible quants for FLUX.1 models.
- second-state/FLUX.1-schnell-GGUF (Text-to-Image • 83.8M • Updated • 1.31k • 15)
- second-state/FLUX.1-dev-GGUF (Text-to-Image • 0.1B • Updated • 1.2k • 12)
- second-state/FLUX.1-Redux-dev-GGUF (Text-to-Image • 64.5M • Updated • 215 • 14)
- second-state/FLUX.1-Canny-dev-GGUF (Text-to-Image • 12B • Updated • 191 • 13)
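Once a LlamaEdge server is running, it exposes an OpenAI-compatible `/v1/chat/completions` endpoint (port 8080 by default), so any of the text-generation quants above can be queried with a plain HTTP client. A minimal sketch using only the standard library; the model name and base URL are assumptions to check against your server's flags:

```python
import json
import urllib.request


def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completion payload for a local LlamaEdge server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str, model: str = "gemma-3-4b-it",
        base_url: str = "http://localhost:8080") -> str:
    """POST the payload to the server and return the first choice's text."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Usage (requires a running server):
#   reply = ask("Summarize LlamaEdge in one sentence.")
```

Because the API follows the OpenAI wire format, existing OpenAI client libraries can also be pointed at the local server by overriding their base URL.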