other models
sry, have not seen any other quick contact; I have no X or Insta ;)
https://huggingface.co/mistralai/Ministral-3-8B-Instruct-2512
also 14b
how do you decide to make your own GGUF?
have you taken a look at Intel's auto-round?
maybe you can adapt some of it ;)
In the end there are very few tests where, for example, 30B models from different quantization approaches are compared at Q3... ?
I normally just let my code decide which models to convert. It just chooses the highest-ranked models on Hugging Face. I am hardware poor, so it is pretty slow (running on CPU). If you want to do it yourself, I have open-sourced all the code: Layer bumping with llama.cpp
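The selection step described above (pick the highest-ranked models that haven't been converted yet) can be sketched roughly like this. This is a hypothetical illustration, not the actual open-sourced code: the mocked `catalog` list stands in for a real Hub query (e.g. `huggingface_hub.list_models` sorted by downloads), and `ALREADY_CONVERTED` is an assumed bookkeeping set.

```python
# Hypothetical sketch of "let the code choose which models to convert":
# rank candidates by popularity and skip ones already converted to GGUF.
# A real version would query the Hugging Face Hub instead of a local list.

ALREADY_CONVERTED = {"qwen3-8b"}  # assumed bookkeeping of finished conversions

def pick_models(models, limit=2):
    """Return up to `limit` model ids, most-downloaded first, not yet converted."""
    ranked = sorted(models, key=lambda m: m["downloads"], reverse=True)
    return [m["id"] for m in ranked if m["id"] not in ALREADY_CONVERTED][:limit]

# Mocked stand-in for a Hub listing (names and numbers are made up)
catalog = [
    {"id": "qwen3-8b", "downloads": 900_000},
    {"id": "ministral-8b", "downloads": 500_000},
    {"id": "some-14b", "downloads": 300_000},
]

print(pick_models(catalog))  # -> ['ministral-8b', 'some-14b']
```

Each picked id would then be fed to the usual `convert_hf_to_gguf.py` / `llama-quantize` pipeline; on CPU that conversion step is the slow part, not the selection.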
I'm not that into Python ;)
and my internet is slow ...
would be nice to have a test comparing IQ3, your Q3 (Unsloth has one too?), and Intel's Q3 with, let's say, Qwen3-8B ...