Meta's LLaMA 7B - AWQ GGUF

These files are in GGUF format.

The model was converted using llama.cpp together with the AWQ quantization method.

How to use the models in llama.cpp

./main -m ggml-model-q4_0-awq.gguf -n 128 --prompt "Once upon a time"

Please refer to the instructions in the PR.
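
If you prefer calling the model from Python rather than the ./main binary, the llama-cpp-python bindings can load the same GGUF file. This is a minimal sketch and not part of the original card: it assumes llama-cpp-python is installed (pip install llama-cpp-python) and that ggml-model-q4_0-awq.gguf sits in the working directory.

```python
# Minimal sketch: run the AWQ-quantized GGUF file via llama-cpp-python
# (an assumption -- the card itself only documents the ./main CLI).
from llama_cpp import Llama

# Path to the quantized model file named in the card.
llm = Llama(model_path="ggml-model-q4_0-awq.gguf", n_ctx=2048)

# Same prompt and token budget as the ./main example above.
out = llm("Once upon a time", max_tokens=128)
print(out["choices"][0]["text"])
```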

Format: GGUF
Model size: 7B params
Architecture: llama