Model Card for Character_finetuned_unsloth_Llama3.1_70B_4bit

This model is a fine-tuned version of Meta's Llama 3.1 70B, trained with Unsloth. It was fine-tuned for the 2026-2027 Science Fair competition.

  • Developed by: Monkebest
  • Model type: Text-Generation
  • Library: Transformers
  • License: Apache License 2.0

Uses

This model is intended to test the differences between character fine-tuned Llama 3.1 8B and 70B. The repo owner will be its main user, in a research paper; it is unlikely to have a significant impact on other users.

Direct Use

N/A

Out-of-Scope Use

This model is unlikely to be used maliciously or incorrectly, given its narrow focus on playing a character and its non-state-of-the-art capability.

Bias, Risks, and Limitations

This model's training dataset contains only 5,000 entries, synthesized from 200 human-written entries, which limits its performance in many categories.

Recommendations

I do not recommend using this model outside of its purpose as a simple comparison tool against the 8B fine-tuned version, as it is very limited in all categories other than character building.

Framework versions

  • PEFT 0.18.0
Model tree for monkebest/Character_finetuned_unsloth_Llama3.1_70B_4bit

Adapter
(7)
this model