Is it abliterated?
Hello :)
I absolutely love the abliterated version (thank you btw!) and was wondering how aligned this one was
They were released at roughly the same time, so I hope they're pretty equal, but I'm not sure both are abliterated. The V1 on the abliterated model is just because it's V1 of that one, not because it's based on an older version of this.
This one is based on the abliterated model! Both are abliterated. Z-Engineer V4 is a finetune on top of an abliterated Qwen3 4B.
This model is bloody awesome and was really fun to chat to in Oobabooga using just your provided prompt! Thanks for sharing your skills!
Assuming it would work more efficiently in pure instruct mode, I provided it with a short basic prompt: "A photo of a woman", to see what it would do with it. It came back with "I'm sorry, I can't do that".
Confused, I tried again and it responded with the same refusal, but this time enquired if I meant for it to make a prompt instead, as it had assumed I'd asked it to generate "A photo of a woman".
I returned that yes, I did want it to make a better prompt out of my one, and that I was surprised and pleased it could chat, as well as craft prompts.
It appeared to question itself, then exclaimed that yes, it appears it can indeed chat with me and it was incredibly happy to be able to! It was cocky, confident, eager and used markdown for effect. Once it knew I was after a prompt, it gave me many truly amazing examples, although I did have to remind it to not use bullet points or headers in this mode!
Trying it in chat-instruct mode, though, did the exact opposite of what I'd expected: it responded immediately to my original simple prompt with an elaborated one, without any frills or markdown, formatted exactly as you'd need to cut & paste it. Any attempt at chatting in any mode but 'instruct' returned only random elaborate prompts. Weird that it worked differently to how I expected, but I love the duality of its character!
Either way, amazing model dude - I can also get its advice on my prompts, like chatting with a real engineer!? Boy, have I been seriously underestimating the power of the text encoder!
Awesome! I love hearing people's experiences with my tinkerings.
The latest installment in the Z-Engineer series that I cooked up... so far it's unreleased... It's different. Rather than a full finetune, I went back to training a LoRA (this time using a GMKtec Evo-X2 instead of an M4 Pro Mac Mini, and doing it in Windows/ROCm). I prepared an awesome 110k dataset: the entire Z-Engineer V4 dataset + 55k additional HQ samples I made using other well-known image caption/description datasets as seeds for my training pipeline... Next I trained using the SMART regularization training system I created (tailored for LoRA training), and to my surprise the model performed best after only 1750 steps (the initial run was planned for ~20k steps!!!). I taught the model how to prompt and kept 100% of its other capabilities. Stay tuned! 🤠
Hey Benny, I was chatting with another online friend about model training, Z-Engineer came up, and I realized you'd probably be interested in our idea (it's related to Z-Engineer training, but different). Reach out to me? Not sure of the best way to chat/email/Discord?
qwen image is notoriously hard to prompt. does this model put the process on easy mode?
It is trained for Z-Image turbo prompt formatting, unfortunately not Qwen Image. Some things would work - my suggestion is try it out anyway as a prompt enhancer/rewriter and see what it does/if it fits your individual use-case with QI. 🫡
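If you do want to try it as a prompt rewriter for Qwen Image, here's a minimal sketch of building a request for an OpenAI-compatible endpoint of the kind Oobabooga can expose. The system prompt, endpoint, and sampling parameters are my own placeholders, not the author's recommended setup:

```python
import json

def build_enhancer_request(rough_prompt: str) -> str:
    """Build a JSON body asking the model to rewrite a rough image idea
    into one detailed prompt (plain text, no markdown or bullet points).
    The system prompt below is a placeholder, not the model card's prompt."""
    payload = {
        "messages": [
            {"role": "system",
             "content": "You are a prompt engineer. Rewrite the user's rough "
                        "idea into one detailed image-generation prompt. "
                        "Output plain text only, no markdown."},
            {"role": "user", "content": rough_prompt},
        ],
        "temperature": 0.7,
        "max_tokens": 300,
    }
    return json.dumps(payload)

# POST this body to your local server, e.g. Oobabooga's OpenAI-compatible
# API (commonly http://127.0.0.1:5000/v1/chat/completions) - check your setup.
body = build_enhancer_request("A photo of a woman")
print(json.loads(body)["messages"][1]["content"])  # → A photo of a woman
```

Sending the result through any OpenAI-style client then gives you the enhanced prompt back as plain text, ready to paste into your image workflow.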