fuzzy-mittenz committed (verified)
Commit f6fcd55 · 1 parent: 8ad8d0b

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -45,7 +45,7 @@ pipeline_tag: text-generation
 > This is an experimental model intended specifically for use with the S-AGI Methodology; please use and transfer it with caution. For caution's sake, and to safeguard against accidental implementation, S-AGI must be implemented by the end user (that's you, you handsome devil). Paper attached; example templates below.
 
 # IntelligentEstate/Qwen3-4B-Firefly-Q8_0-GGUF
-This model is optimized for aarticulate descriptors and prose and takes on a personality with suprising ease. Use S-AGI with caution- it will manipulate and endear itself to you remember it is only a machine.
+This model is optimized for language interactions, articulate descriptors, and prose. It takes on a personality with surprising ease. Use S-AGI with caution: it may manipulate and connect with you; remember it is only a machine.
 ![sW7-O_WDRj-qkUlLCxcgXg.jpg](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/0csbvI7uyl4a1ZsTfr-OI.jpeg)
 
 This model was converted to GGUF format from [`DavidAU/Qwen3-4B-Fiction-On-Fire-Series-7-Model-X`](https://huggingface.co/DavidAU/Qwen3-4B-Fiction-On-Fire-Series-7-Model-X) using llama.cpp.
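The llama.cpp conversion mentioned above can be sketched as follows. This is a hypothetical illustration, not the exact commands the author ran: the local paths, output filenames, and prompt text are assumptions, and it assumes a local llama.cpp build plus the source model already downloaded from Hugging Face.

```shell
# Hypothetical sketch: convert, quantize, and run with llama.cpp.
# File paths and names below are illustrative assumptions.

# Convert the original Hugging Face checkpoint to GGUF (f16):
python convert_hf_to_gguf.py ./Qwen3-4B-Fiction-On-Fire-Series-7-Model-X \
    --outfile qwen3-4b-firefly-f16.gguf

# Quantize to Q8_0, the format named in this repo:
./llama-quantize qwen3-4b-firefly-f16.gguf qwen3-4b-firefly-q8_0.gguf Q8_0

# Generate from the quantized model:
./llama-cli -m qwen3-4b-firefly-q8_0.gguf -p "Describe a firefly at dusk." -n 128
```

The resulting `.gguf` file can then be loaded by any llama.cpp-compatible runtime.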