---
library_name: transformers
license: gemma
base_model: google/gemma-3-270m-it
tags:
- generated_from_trainer
datasets:
- HuggingFaceH4/CodeAlpaca_20K
model-index:
- name: outputs/gemma-3-270m-codealpaca-finetune
  results: []
---

[Built with Axolotl](https://github.com/axolotl-ai-cloud/axolotl)
See axolotl config

axolotl version: `0.12.0.dev0`

```yaml
base_model: google/gemma-3-270m-it
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

ddp_find_unused_parameters: true

load_in_8bit: false
load_in_4bit: false

chat_template: gemma3
eot_tokens:
  - "<end_of_turn>"

datasets:
  - path: HuggingFaceH4/CodeAlpaca_20K
    type:
      field_instruction: prompt
      field_input: input
      field_output: output
      format: |
        Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

        ### Instruction:
        {instruction}

        ### Input:
        {input}

        ### Response:
      no_input_format: |
        Below is an instruction that describes a task. Write a response that appropriately completes the request.

        ### Instruction:
        {instruction}

        ### Response:

val_set_size: 0.05  # Use 5% of the data for validation
output_dir: ./outputs/gemma-3-270m-codealpaca-finetune

sequence_len: 2048
sample_packing: true
eval_sample_packing: false

gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 3
optimizer: adamw_torch
lr_scheduler: cosine
learning_rate: 0.00002

bf16: true
tf32: true

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
resume_from_checkpoint:
logging_steps: 1
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 1
saves_per_epoch: 1
weight_decay: 0.0
special_tokens:
```
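For reference, the `format` and `no_input_format` templates in the config above turn each CodeAlpaca record into an Alpaca-style prompt. The sketch below is a minimal illustration of that mapping using a hypothetical record; it is not Axolotl's actual preprocessing code, which additionally applies the `gemma3` chat template and sample packing.

```python
# Minimal sketch of the prompt templates defined in the config above.
# Illustration only; Axolotl's real pipeline also handles tokenization,
# the gemma3 chat template, and sample packing.

PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)


def render_example(record: dict) -> str:
    """Render one record (fields as named in the config: prompt, input, output)."""
    template = PROMPT_WITH_INPUT if record.get("input") else PROMPT_NO_INPUT
    # str.format ignores the unused {input} kwarg for the no-input template.
    return template.format(instruction=record["prompt"], input=record.get("input", ""))


# Hypothetical record in the shape referenced by the dataset config.
example = {
    "prompt": "Write a Python function that reverses a string.",
    "input": "",
    "output": "def reverse_string(s):\n    return s[::-1]",
}
print(render_example(example) + example["output"])
```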

# outputs/gemma-3-270m-codealpaca-finetune

This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it) on the HuggingFaceH4/CodeAlpaca_20K dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Memory/max memory active (GiB): 8.51
- Memory/max memory allocated (GiB): 8.51
- Memory/device memory reserved (GiB): 10.27

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 34
- training_steps: 348

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Memory Active (GiB) | Memory Allocated (GiB) | Memory Reserved (GiB) |
|:-------------:|:------:|:----:|:---------------:|:-------------------:|:----------------------:|:---------------------:|
| No log        | 0      | 0    | nan             | 5.84                | 5.84                   | 5.86                  |
| 0.0           | 0.9978 | 116  | nan             | 8.51                | 8.51                   | 10.27                 |
| 0.0           | 1.9892 | 232  | nan             | 8.51                | 8.51                   | 10.27                 |
| 0.0           | 2.9806 | 348  | nan             | 8.51                | 8.51                   | 10.27                 |

### Framework versions

- Transformers 4.55.0
- PyTorch 2.6.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
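### Usage example

The card does not include an inference snippet, so the following is a minimal, unverified sketch of how this checkpoint could be loaded with `transformers` and prompted in the same Alpaca-style format used for training. The local path is assumed to match `output_dir` from the config above, and the instruction in the prompt is purely illustrative.

```python
# Unverified sketch: load the fine-tuned checkpoint and prompt it in the
# same Alpaca-style format used during training.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed local path (the training config's output_dir); adjust as needed.
model_path = "./outputs/gemma-3-270m-codealpaca-finetune"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16)

# Hypothetical instruction, formatted like the config's no_input_format template.
prompt = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Write a Python function that checks whether a number is prime.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated tokens, not the echoed prompt.
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(new_tokens, skip_special_tokens=True))
```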