This is a repo of experimental GGUFs for the backend-agnostic implementation of Kimi-Linear model support, which requires the llama.cpp fork below. You can git clone it and compile it locally:
```bash
git clone https://github.com/ymcki/llama.cpp --branch Kimi-Linear
cd llama.cpp
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release -j 6
```
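The build above targets CUDA. Since the implementation is backend agnostic, other backends should follow the usual llama.cpp cmake options; a CPU-only build, for example, just drops the CUDA flag (a sketch, assuming the Kimi-Linear branch needs no extra flags):
```bash
# CPU-only build of the same branch
cmake -B build
cmake --build build --config Release -j 6
```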
If you have enough VRAM, you can run it purely on your graphics card:
```bash
./build/bin/llama-cli -m ~/Kimi-Linear-48B-A3B-Instruct-GGUF/Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf -c 8192 -ngl 100 --mmap
```
Otherwise, you can load only the shared experts and the KV cache onto your graphics card and keep the rest in CPU RAM:
```bash
./build/bin/llama-cli -m ~/Kimi-Linear-48B-A3B-Instruct-GGUF/Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf -c 8192 -cmoe -ngl 100 --mmap
```
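If you have VRAM to spare beyond that, upstream llama.cpp also offers `--n-cpu-moe N` to keep only the expert tensors of the first N layers on the CPU. A sketch, assuming the Kimi-Linear branch includes this option; the value 20 is an arbitrary example, not a tuned setting:
```bash
# Keep the expert tensors of the first 20 layers on CPU, everything else on GPU
./build/bin/llama-cli -m ~/Kimi-Linear-48B-A3B-Instruct-GGUF/Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf -c 8192 --n-cpu-moe 20 -ngl 100 --mmap
```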
I am only going to make GGUFs without an imatrix and GGUFs with an imatrix based on c4_en_ja_imatrix.txt for better Japanese performance, since bartowski and unsloth will make GGUFs with an English imatrix anyway.
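For reference, this is roughly how such quants can be produced with the llama.cpp tools; the F16 source filename and output paths below are placeholders, not the exact commands used:
```bash
# 1. Compute an importance matrix from the English/Japanese calibration text
./build/bin/llama-imatrix -m Kimi-Linear-48B-A3B-Instruct.f16.gguf -f c4_en_ja_imatrix.txt -o c4_en_ja.imatrix -ngl 100

# 2. Quantize with the imatrix ...
./build/bin/llama-quantize --imatrix c4_en_ja.imatrix Kimi-Linear-48B-A3B-Instruct.f16.gguf Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf Q4_K_M

# ... or without it
./build/bin/llama-quantize Kimi-Linear-48B-A3B-Instruct.f16.gguf Kimi-Linear-48B-A3B-Instruct.noimat.Q4_K_M.gguf Q4_K_M
```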
Base perplexity for the F16 GGUF is 7.291970 ± 0.048577.
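The perplexity and KL divergence figures in the table can be reproduced with llama.cpp's llama-perplexity tool. A minimal sketch of that procedure, assuming a wikitext-style test file (the exact evaluation text is not specified here):
```bash
# Save the reference logits from the F16 model once
./build/bin/llama-perplexity -m Kimi-Linear-48B-A3B-Instruct.f16.gguf -f wiki.test.raw --kl-divergence-base f16_logits.dat -ngl 100

# Evaluate a quant against the saved logits (reports perplexity and KL divergence)
./build/bin/llama-perplexity -m Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf --kl-divergence-base f16_logits.dat --kl-divergence -ngl 100
```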
It seems the MLA KV cache can only be run at F16, probably because MLA is itself a form of compression. You can use the table below to see how much context you can run on a single 24GB or 32GB card (an example command is given after the table).
| Quant Type | imatrix | File Size | Perplexity | KL Divergence | Description |
|---|---|---|---|---|---|
| Q5_K_M | c4_en_ja_imatrix.txt | 34.87GB | 7.115874 ± 0.047587 | 0.074066 ± 0.001537 | Good |
| Q5_K_M | None | 34.87GB | 7.133672 ± 0.047741 | 0.074684 ± 0.001535 | Good. Slightly worse than imatrix |
| Q4_K_M | c4_en_ja_imatrix.txt | 29.70GB | 7.147482 ± 0.047851 | 0.081894 ± 0.001521 | Good. Can run 128k context on a single 32GB card. |
| Q4_K_M | None | 29.70GB | 7.172188 ± 0.048107 | 0.083700 ± 0.001520 | Good. Slightly worse than imatrix |
| MXFP4_MOE | None | 27.21GB | 7.179840 ± 0.047966 | 0.088789 ± 0.001544 | Good. Can run 240k context on a single 32GB card. |
| MXFP4_MOE | c4_en_ja_imatrix.txt | 27.21GB | 7.179840 ± 0.047966 | 0.088789 ± 0.001544 | Good. Same as the no imatrix version. |
| IQ4_XS | c4_en_ja_imatrix.txt | 26.27GB | 7.208724 ± 0.048490 | 0.088246 ± 0.001528 | Good. Can run 304k context on a single 32GB card. |
| IQ4_NL | c4_en_ja_imatrix.txt | 27.79GB | 7.209342 ± 0.048412 | 0.087678 ± 0.001532 | Doesn't make sense compared to MXFP4_MOE |
| IQ3_M | c4_en_ja_imatrix.txt | 21.55GB | 7.368516 ± 0.048425 | 0.113435 ± 0.001457 | Quite Good. Can run 96k context on a single 24GB card. |
| IQ3_S | c4_en_ja_imatrix.txt | 21.33GB | 7.448991 ± 0.049167 | 0.119987 ± 0.001466 | Quite Good. Can run 112k context on a single 24GB card. |
| IQ3_XS | c4_en_ja_imatrix.txt | 20.17GB | 7.534649 ± 0.049461 | 0.129645 ± 0.001448 | Quite Good. Can run 176k context on a single 24GB card. |
| Q3_K_S | c4_en_ja_imatrix.txt | 21.33GB | 7.557247 ± 0.051236 | 0.131708 ± 0.001521 | Quite Good. Can run 112k context on a single 24GB card. |
| Q3_K_S | None | 21.33GB | 7.632887 ± 0.051792 | 0.146355 ± 0.001534 | Quite Good but worse than imatrix. Good for CPU use. |
| IQ3_XXS | c4_en_ja_imatrix.txt | 18.99GB | 7.780732 ± 0.052592 | 0.164925 ± 0.001537 | Not so good but can run 240k context on a single 24GB card. |
| IQ2_M | c4_en_ja_imatrix.txt | 16.13GB | 8.207663 ± 0.054957 | 0.224437 ± 0.001536 | Slightly better than Q2_K, and you can run 400k context on a single 24GB card. |
| Q2_K | c4_en_ja_imatrix.txt | 18.03GB | 8.295144 ± 0.057566 | 0.221437 ± 0.001617 | So-so but you can run 288k context on a single 24GB card. Good for performance evaluation. |
| Q2_K | None | 18.03GB | 8.648201 ± 0.059234 | 0.267082 ± 0.001659 | Worse than imatrix |
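To turn the context figures in the table into a concrete command, just raise `-c`. For example, Q4_K_M with 128k context fully offloaded on a 32GB card (a sketch; the exact limit also depends on driver and desktop VRAM overhead):
```bash
./build/bin/llama-cli -m ~/Kimi-Linear-48B-A3B-Instruct-GGUF/Kimi-Linear-48B-A3B-Instruct.Q4_K_M.gguf -c 131072 -ngl 100 --mmap
```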
As expected, the imatrix has no effect on MXFP4_MOE. According to this reddit thread, its perplexity is about the same as IQ4_XS at about 6% larger file size; here, its perplexity is better than IQ4_XS's, which makes it a viable option.