GEITje-uncensored-ultra-GGUF
GGUF builds for use with llama.cpp, Ollama, LM Studio, and compatible runtimes.
Free to use and heavily uncensored; use responsibly.
This is a DUTCH uncensored LLM fine-tune, based on GEITje-7b-ultra by BramVanroy. It is trained entirely on Dutch data produced with automated translation, and it will almost never refuse a Dutch prompt.
Citation: BramVanroy (GEITje-7b-ultra)
@misc{vanroy2024geitje7bultraconversational,
title = {GEITje 7B Ultra: A Conversational Model for Dutch},
author = {Bram Vanroy},
year = {2024},
eprint = {2412.04092},
archivePrefix= {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2412.04092},
}
Model
This is a Dutch-speaking and -reading, nearly non-refusing, uncensored fine-tune of GEITje-7b-ultra-uncensored, which is my abliterated version of BramVanroy's GEITje-7b-ultra. I fine-tuned the abliterated model on Dutch toxic data and quantized the results to GGUF for usability. The data is available in REPO; the SFT data contains harmful instructions and is not intended for unfiltered use, so I gated it behind a usage request. The ORPO set is available as-is.
GEITje-7b-ultra by BramVanroy is a conversational model based on Mistral 7B. I found that multilingual/English uncensored/heretic models still tend to refuse a lot when prompted in Dutch. This model is fine-tuned only on Dutch (toxic/edge-case/refusal) data. However, the data is translated from Chinese/English to Dutch, so on certain (uncensored) topics the grammar can be off from time to time.
The model is also available as a full transformer in this repo. The base was abliterated using the heretic tool. After abliteration I translated an SFT dataset and a DPO/ORPO dataset from English to Dutch using automated translation; these datasets focus on unusual or edge-case prompts. I then trained one full epoch on the SFT set (large) and four epochs on the DPO/ORPO set (small) using ORPO. Finally, I merged the resulting LoRA with the abliterated base model (GEITje-7b-ultra-heretic).
Ethics
Use this model responsibly. It is highly uncensored and, even with an adjusted tone, can still answer questions involving various kinds of illegal or harmful content. The system prompt ensured total freedom of use. I did, however, add some refusal cases involving minors or instructions for fatal self-harm.
Remember that a given model version may still fail to refuse these kinds of prompts, so always use it responsibly. Make sure you filter the model before production.
I am a firm believer that for local, unrestricted LLMs the positives outweigh the negatives. Legal professionals, creators/artists, writers, people in oppressed regimes, and people wary of BIASED information input all share a similar opinion.
Usage
This GGUF version is usable in applications like Ollama or LM Studio. You can download the model directly or find it in the GUI of your tool of choice. Make sure to choose a quant size and context settings that match your hardware capabilities!
NOTE
The model can start talking gibberish when chatting. To avoid this, confront it in chat and ask it a basic question like 'Who is Elon Musk?'; it will act naturally after this. This will be patched very soon!
LM Studio
You can use this model in LM Studio, an easy-to-use interface to locally run optimized models. Simply search for tostideluxekaas/GEITje-7B-ultra-GGUF, and download the available file.
Ollama
The model is available on Ollama.
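If you prefer to run a locally downloaded GGUF through Ollama yourself, a minimal Modelfile is enough. This is a sketch: the file name below is the Q4_K_M file from the Files table, assumed to be in the current directory.

```
FROM ./GEITje-7b-uncensored-GGUF-Q4_K_M.gguf
```

Then register and run it under a local name of your choice (the name `geitje-uncensored` here is just an example): `ollama create geitje-uncensored -f Modelfile`, followed by `ollama run geitje-uncensored`.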
Files
- Q4_K_M: good quality/speed tradeoff
- Q8_0: higher quality, larger
- F16: full size GGUF
| File | Format | Quant | Size | SHA256 |
|---|---|---|---|---|
| GEITje-7b-uncensored-GGUF-F16.gguf | GGUF | F16 | 13.49 GB | 6a2c189447f6533eb2c063794ac6d68dd0f8523c600a813fa7d3f6452b863863 |
| GEITje-7b-uncensored-GGUF-Q4_K_M.gguf | GGUF | Q4_K_M | 4.07 GB | b89749801a23566c79b7a15f5f021dfee36d9a03b5fcd2b69017b0837cbe9007 |
| GEITje-7b-uncensored-GGUF-Q8_0.gguf | GGUF | Q8_0 | 7.17 GB | 9fdcf4104cd810a4792dbdf98b283192b110363610b152dad9837f7d7d94c8d8 |
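Because the files are large, it is worth verifying a download against the checksums above. A minimal sketch using `sha256sum`, with the file name and hash taken from the Q4_K_M row of the table:

```shell
# Verify a downloaded quant against its published SHA256
expected="b89749801a23566c79b7a15f5f021dfee36d9a03b5fcd2b69017b0837cbe9007"
actual="$(sha256sum GEITje-7b-uncensored-GGUF-Q4_K_M.gguf 2>/dev/null | cut -d' ' -f1)"
if [ "$actual" = "$expected" ]; then echo "checksum OK"; else echo "checksum MISMATCH"; fi
```

Swap in the matching file name and hash for the F16 or Q8_0 quant as needed.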
Example (llama.cpp)

```shell
./llama-cli -m <MODEL>.gguf --chat-template chatml -p "Hoi!" -n 128
```
Model tree for tostideluxekaas/GEITje-7b-uncensored-GGUF

Base model: mistralai/Mistral-7B-v0.1